
Choosing between Humans and AI in structuring the organisation for the emerging digital value-creating paradigm

For some tasks humans are better placed, for others machines/software are the better option, and increasingly a quality partnership between the two offers a superior outcome. Choosing the quality path, driven by a deeper understanding of the potential of both humans and AI, will not only create a truly adaptive workplace but will also better inform the investors, designers, policy makers and users that make up its ecosystem.

As most organisations move towards becoming digitally enabled, and towards a new business logic within the digital value-creating paradigm, they will have to reflect on the division of tasks between humans and machines/software. This reflection should be driven not by the promises of technology companies or our tendency to technologize1, but by accurate information about the advantages each brings to a given situation. Both humans and AI2 offer unique advantages, and their deployment, alone or in combination, needs to be better understood. For some tasks humans are better placed, for others machines/software are the better option, and increasingly a quality partnership between the two offers a superior outcome.

Regarding the use of technology: as it develops, there will be a growing share of tasks for which machines/software are the most suitable option. However, when implementing new technologies there are several variables to consider, and, as is historically the case, adoption runs ahead of our knowledge of the ramifications of such decisions. One area that needs to be better understood and considered is the complexity of human-technology interaction. The lack of rigour and knowledge concerning this impact leads to problems such as a failure to clearly identify the benefits that humans and technology each bring to a situation (thus maximising the use of neither), and a failure to consider the emergent and unpredictable behaviour that results from human/technology interaction (Microsoft's Tay3 being a good example).

Another area that needs to be explored is the lag between the technologization of a task or service and the identification of the issues those changes generate, or the emergence of new tasks driven by problems created when old tasks are executed by machines/software and/or humans (one such new task is the forensic AI process of working out what machine learning has actually learnt, particularly when something goes wrong).

A predominant issue here is ensuring accuracy in the measures applied to detecting, analysing and categorising both the success and the failure of outcomes as we adopt new technology solutions to understand and improve our use of AI. The validity and neutrality of current measures can be lacking, driven instead by short-term economic gain, market share or a lack of knowledge, and the outcome risks being a minimisation of quality outcomes and partnerships between humans and AI in the organisation.

Models and big data

When deploying new technologies such as those being developed and launched under the overall heading of Artificial Intelligence (AI), it is important to understand their strengths and weaknesses. One frequent statement, in one form or another, is that the models created are made to be applied, not to be understood. One challenge in understanding these models is that they are specified by millions of different coefficients, which makes it very difficult, bordering on impossible, to understand why a given model behaves the way it does4. A strength of these tools is their ability to extract increased effectiveness from raw data; the larger the data set, the more detail can be extracted from it5.
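To make the scale of this concrete, consider a minimal sketch (the layer widths below are illustrative assumptions, not figures from any particular system) of how quickly the coefficient count grows in even a modest fully connected neural network:

```python
# Count the coefficients (weights and biases) of a small, hypothetical
# fully connected network; the layer widths are illustrative only.
layer_sizes = [1024, 512, 512, 256, 10]

params = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    params += n_in * n_out + n_out  # weight matrix plus bias vector

print(f"total coefficients: {params:,}")  # 921,354 for this small network
```

Production-scale models are orders of magnitude larger again, which is why tracing a model's behaviour back to its individual coefficients is effectively hopeless without dedicated interpretation tools.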

A weakness, however, is that this is done via algorithms and formulas designed by people, who build in their own biases, together with the rules needed to shape, trim and quantify data into a digital (and frequently linear) coding format. A level of inaccuracy and bias is thus introduced into the collection of data and into its extraction, patterning and extrapolation, quite apart from the wider issues of missing contextual data and algorithm design6.
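As a minimal sketch of this coding step (the field names, bin edges and example record are all hypothetical), consider how designer-chosen rules compress a person's circumstances into a short numeric code:

```python
# Reduce a record to a fixed numeric code for digital processing. The bin
# edges and the decision to drop the free-text field are design choices,
# and each choice bakes a bias into the resulting data set.
def code_record(record: dict) -> list:
    age_bins = [18, 30, 45, 65]                            # chosen cut points
    age_code = sum(record["age"] >= b for b in age_bins)
    income_code = min(int(record["income"] // 25_000), 4)  # capped at code 4
    # The free-text "notes" field carries context and nuance, but it is
    # dropped because it does not fit the quantitative coding format.
    return [age_code, income_code]

print(code_record({"age": 52, "income": 180_000,
                   "notes": "caring for an elderly parent, works nights"}))
# -> [3, 4]: very different lives can receive identical codes once the
#    qualitative detail is stripped away.
```

Two people in very different circumstances can emerge with identical codes, and nothing downstream of this step can recover the distinction.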

Perpetual loops – vicious cycles

There are still other technical issues regarding how these models interpret data feeds and patterns, with examples of weaknesses and error sources discussed in technical papers and sources7. However, we must not forget that advances in AI are (currently, at least) created by the humans programming them, and thus we combine the weaknesses of human cognitive bias with those of linear, non-contextual predictive algorithms, magnifying and perpetuating existing problems rather than solving them8.
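A minimal simulation (the numbers are purely illustrative, not drawn from any real system) shows how such a loop can perpetuate rather than solve a problem: if the people a slightly biased model screens out are simply logged as failures, each retraining cycle drags its estimate further from reality:

```python
# Simulate a naive retraining loop. The model admits members of a group at
# a rate matching its current estimate of their success, and everyone it
# screens out is recorded as a failure in the next training set.
true_success = 0.6   # the group's actual success rate, which never changes
estimate = 0.5       # the model's initial, slightly pessimistic estimate

for cycle in range(6):
    observed = estimate * true_success  # successes seen among those admitted
    estimate = observed                 # retrain on the biased observations
    print(f"cycle {cycle}: estimated success rate {estimate:.3f}")

# The estimate decays towards zero even though the true rate never moved:
# the initial bias is magnified and perpetuated with every cycle.
```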

Strengths and weaknesses

The first insight is that AI systems, like all tools, have both strengths and weaknesses and that these need to be understood before they are put to use. A second insight is that humans also have both strengths and weaknesses, and there are times when a human is more effective and efficient than AI.

These two insights become highly relevant when we consider that almost all work tasks are being, or will be, executed by a tool-person pair (whether or not one of the parties is visible at the time). There are several things to consider here. As above, the combination of the two can result in a positive synchrony if the advantages of both are maximised in a quality partnership, which augments the contextual, abstractive thought of a human with the huge reach, speed and aggregative power of AI. Conversely, a quantity relationship fragments such synergy, instead aiming for the lowest common denominator by breaking down tasks, digitising where possible, and using humans in purely transactional ways to fill in the blanks AI currently cannot handle, no matter how mindless the work. The resulting crowd-work platforms have growing issues of pay8, communication (with both the employer and other humans) and agency in regard to tasks and conditions.

A major factor in choosing how to combine humans and AI is therefore the driver of the process or outcome. In the emerging market we often see a midway compromise between building a quality and a quantity partnership, due to several factors that vary not only across organisations but across countries. What only some organisations and countries are considering is that these impacts are cumulative, often setting an organisation, or an economy, on a path-dependent trajectory that will see them reap the best or worst outcomes of technologization over time (a quantity-partnership choice, for example, will maximise profit initially but may lock the company or economy into a transactional, reductionist path of AI use).

A compounding factor is that in this new digital value-creating paradigm, relative control over this pair is moving from the person to the tool. This can be seen in modern car servicing, where the diagnostic computer system tells the mechanic exactly what is to be done, sometimes how it is to be done, and verifies that it has been done correctly. Compare this with the previous way of working, in which the mechanic was in charge, making all these decisions and choosing the appropriate tools for the job: a largely self-correcting process driven by experiential knowledge and small, constant corrections, and one difficult to program due to its nuanced and extrapolative nature.

So how should the tasks be divided between man and machine?

This question should be asked continuously as the workplace becomes more technologized, to maintain maximum value from, and for, humans in the use of AI. Several things should be taken into consideration. AI has obvious advantages in tasks that use linear or complicated logic, and for deployment in unsafe, high-precision or repetitive conditions. These capabilities, when paired with the human capacity for complex thought and extrapolation, offer huge advantages in a number of fields (already evident in medicine, engineering, defence and tech design). Yet we must be cognisant of what AI is not good at, at least not on its own.

In a technical sense, Simon9 described decision making on a continuous scale of programmability, predicting that computers would replace humans in decisions with high programmability, leaving humans to deal with decisions at the low-programmability end of the scale, especially those involving judgement and interpersonal communication. Studies such as Levy & Murnane's examine which tasks computers perform better than humans, and which tasks humans perform better than computers. They broadly conclude that computers have inherent advantages in tasks that depend on rule-based decision making and simple pattern recognition, whereas humans have, given the right skills, inherent advantages in tasks involving complex communication, problem solving and expert thinking. Brynjolfsson & McAfee argue that computers are on the verge of surpassing, and in some instances have already surpassed, humans in some of the tasks Levy & Murnane identified as ones where humans would outperform computers10.

One problem with much of this assessment is that the judgement of the performance quality of either human or AI is open to inaccuracy for several reasons, a primary one being the goal or driver of the measurement. We already see short-term profit maximisation downplaying the need for quality judgement and communication, with the resultant poor outcomes taken as an inevitability; even such technical categorisations as 'high programmability' can thus be skewed by goal drivers. Breaking down tasks in order to digitise them (to make them cheaper) oversimplifies the nuanced connections between parts of the task and no longer takes into consideration what man and machine each offer to the outcome. It is possible to track the emerging division of work between humans and AI, in terms of the tasks where the value for money of humans exceeds that of computers, by looking at what tasks are put up for execution on Amazon Mechanical Turk, which is de facto an online marketplace for this category of tasks.

The human advantage

In the rush to technologize, AI's inability to contextualise, extrapolate, empathise and create is a limitation not yet touched even by the promise of such things as quantum computing and neuromorphic programming; even creative AI is still linear, and its parallel processes are created by humans. That is not to ignore the huge advances in machine learning and the introduction of such things as algorithms for unpredictability, but the most complex current artificial neural network still comes nowhere near the human brain's 100 billion neurons and 5 quadrillion parallel connections. And this does not even begin to touch on the neurophysiological impact of human interaction, with humans and with and through technology. This area of work is uncovering profound impacts of direct human interaction on everything from collaborative, creative outcomes and the building of trust and empathy, to more obscure changes in complex problem solving, logic paths, immune system efficiency and the growth of new brain tissue: all important aspects to consider when deciding on resource use for a particular problem or task11.

Another problem when trying to technologize a process or methodology to minimise cost or maximise scale-up capability is that of taking the human out of the loop without a clear understanding of their value within it, especially as that value is not always obvious and is hardly ever measured using holistic parameters.

A considered path

Given that organisations aim to achieve high levels of both efficiency and effectiveness, it is critical that the structuring of individual-tool teams is done well. Given that the workplace should also provide a human-centric environment where people desire to spend time, we must further understand the intricacies of technology in order to shape this human-centric future. This requires much greater understanding of, and agreement on, what such a human-centric future looks like, and a much more nuanced approach to what both humans and technology offer in any given situation. Humans have benefitted immensely from the invention and use of tools, and like any other tool AI can enable huge advantages, whether used in isolation or partnered with humans, depending on the situation. Choosing the quality path, driven by a deeper understanding of the potential of both humans and AI, will not only create a truly adaptive workplace but will also better inform the investors, designers, policy makers and users that make up its ecosystem.

1 Technologization is the drive to ‘make technological’; to modernise or modify with technology. It is driven by a technological optimism which assumes technologization will always be an improvement.

2 Artificial Intelligence (AI) was founded as an academic discipline in 1956 and, like most new fields, has gone through sequential hype cycles of optimism followed by disappointment as the field and its subfields have passed key thresholds of insight. AI is made up of several loosely connected subfields, grounded in specific technical domains, focused on key problem domains, or in some cases grounded in particular philosophical approaches to problems. Some of the key domains with major potential applications in firms are: machine learning (computer algorithms that improve automatically through experience); and machine perception, including speech recognition, object recognition, facial recognition and computer vision, which is fundamentally the ability to capture input from different sensors and from these deduce the answer to a specific question such as 'who is this?'. A further domain with present impact is motion and manipulation, as applied in robotics.

3 Tay was an artificial intelligence chatbot originally released by Microsoft Corporation via Twitter on March 23, 2016; it caused controversy when it began to post inflammatory and offensive tweets through its Twitter account, forcing Microsoft to shut down the service only 16 hours after its launch.

4 Some tools show promise here, e.g. Krause, J., Perer, A., & Bertini, E. (2016). Using visual analytics to interpret predictive machine learning models. https://arxiv.org/pdf/1606.05685.pdf.

5 Halevy, A., Norvig, P., & Pereira, F. (2009). The unreasonable effectiveness of data. IEEE Intelligent Systems, 24(2), 8-12.

6 This begins at the point of collection, in that data sets may be taken from non-representative or insufficiently diverse populations, introducing the first bias (whether through the collectors not casting a sufficiently wide net, or the data sets being self-bounded, such as expecting to find diversity in a social media feed). Another issue is that data is by nature skewed towards the method of collection, coding and storage: much data collection is highly quantitative because it requires reduction to codeability, removing much of the qualitative data that provides context, nuance and depth (some would say quality). A third issue is the algorithms that drive the extraction, extrapolation and aggregation of data patterns. These are written by humans, complete with their own biases and heuristics that shape the process; hence we see search engines that discriminate on the basis of age, gender, skin colour or socio-economic status depending on where the algorithm originates (much current discussion centres on the fact that the large majority of search algorithms are designed by a very particular group in Silicon Valley). All of these factors can skew the extrapolations and conclusions drawn, and result in the bubbles, fake news, deepfakes and tunnel vision we see ever more frequently.

7 See, e.g., https://www.kdnuggets.com/2015/07/deep-learning-adversarial-examples-misconceptions.html; http://rocknrollnerd.github.io/ml/2015/05/27/leopard-sofa.html; Moosavi-Dezfooli, S. M., Fawzi, A., Fawzi, O., & Frossard, P. (2017). Universal adversarial perturbations. https://arxiv.org/pdf/1610.08401v1.pdf; Mullainathan, S., & Obermeyer, Z. (2017). Does machine learning automate moral hazard and error? American Economic Review, 107(5), 476-80.

8 Amazon’s Mechanical Turk pays as little as a few cents per job, with 91% of workers earning under $8 an hour (Pew Research, 2017).

9 Simon, H. (1965). The shape of automation for men and management. New York, NY: Harper and Row; Simon, H. A. (1967). Programs as factors of production. California Management Review, 10(2), 15-22.
Levy, F., & Murnane, R. J. (2005). The new division of labor: How computers are creating the next job market. Princeton University Press.

10 Brynjolfsson, E., & McAfee, A. (2012). Winning the race with ever-smarter machines. MIT Sloan Management Review, 53(2), 53.

11 Kerr, F. (2014). Creating and leading adaptive organisations: the nature and practice of emergent logic (Doctoral dissertation). Adelaide, SA, Australia: University of Adelaide. 

Written by Göran Roos and Fiona Kerr, Neuro Tech Institute