Curiosity has been hailed as one of the most critical competencies for the modern workplace. It has been shown to boost people’s employability, and countries whose populations are more curious enjoy more economic and political freedom, as well as higher GDPs. It is therefore not surprising that, as future jobs become less predictable, a growing number of organizations are hiring individuals based on what they could learn rather than on what they already know.
Of course, people’s careers are still largely dependent on their academic achievements, which are (at least partly) a result of their curiosity. Since no skill can be learned without a minimum level of interest, curiosity may be considered one of the critical foundations of talent. As Albert Einstein famously noted, “I have no special talent. I am only passionately curious.”
Curiosity is made even more important for people’s careers by the growing automation of jobs. At this year’s World Economic Forum, ManpowerGroup predicted that learnability, the desire to adapt one’s skill set to remain employable throughout one’s working life, is a key antidote to automation. Those who are more willing and able to upskill and develop new expertise are less likely to see their jobs automated. In other words, the wider the range of skills and abilities you acquire, the more relevant you will remain in the workplace. Conversely, if you focus only on optimizing your current performance, your job will eventually consist of repetitive and standardized actions that could be better executed by a machine.
But what if AI were capable of being curious?
As a matter of fact, AI’s desire to learn a directed task cannot be overstated. Most AI problems involve defining an objective or goal that becomes the computer’s number one priority. To appreciate the force of this motivation, just imagine if your desire to learn something ranked highest among all your motivational priorities, above social status or even your physiological needs. In that sense, AI is far more obsessed with learning than humans are.
At the same time, AI is constrained in what it can learn. Its focus and scope are very narrow compared to those of a human, and its insatiable learning appetite applies only to extrinsic directives: learn X, Y, or Z. This stands in stark contrast to its inability to self-direct or be intrinsically curious. In that sense, artificial curiosity is the exact opposite of human curiosity; people are rarely curious about something because they are told to be. Yet this is arguably the biggest downside of human curiosity: it is free-flowing and capricious, so we cannot boost it at will, either in ourselves or in others.
To some degree, most of the complex tasks that AI has automated have exposed the limits of human curiosity when it comes to targeted learning. In fact, even if we don’t like to describe AI learning in terms of curiosity, it is clear that AI is increasingly a substitute for tasks that once required a great deal of human curiosity. Consider the curiosity that went into automobile safety innovation, for example. Remember automobile crash tests? Thanks to the dramatic increase in computing power, a car crash can now be simulated by a computer. In the past, innovative ideas required curiosity, followed by design and testing in a lab. Today, computers can assist curiosity efforts by searching for design optimizations on their own. With this intelligent design process, the computer owns the entire life cycle of idea creation, testing, and validation. The final designs, given enough flexibility, can often surpass what’s humanly possible.
Similar AI design processes are becoming more common across many industries. Google has used them to optimize cooling efficiency in its data centers. NASA engineers have used them to design antennas for maximum sensitivity. With AI, the design-test-feedback cycle can happen in milliseconds instead of weeks. In the future, the number of tunable design parameters and the speed of the search will only increase, further broadening the possible applications of this kind of design.
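To make the design-test-feedback loop concrete, here is a minimal sketch in Python of the kind of automated search described above. It uses a toy random-search optimizer over made-up design parameters; the simulate_crash_score function, the parameter names, and the scoring weights are illustrative assumptions, not any company’s actual system.

```python
import random

def simulate_crash_score(design):
    """Toy stand-in for an expensive test: score a candidate design.

    In a real pipeline this would call a physics simulation (e.g., a
    crash model). The weights below are purely illustrative.
    """
    return (design["stiffness"] * 0.5
            + design["crumple_zone_m"] * 2.0
            - design["mass_kg"] * 0.01)

def random_design():
    # Sample a candidate design from plausible (made-up) parameter ranges.
    return {
        "stiffness": random.uniform(0.0, 10.0),
        "crumple_zone_m": random.uniform(0.2, 1.0),
        "mass_kg": random.uniform(1200, 1800),
    }

def design_test_feedback(iterations=10_000):
    """Run the create-test-validate loop: propose, score, keep the best."""
    best_design, best_score = None, float("-inf")
    for _ in range(iterations):
        candidate = random_design()               # idea creation
        score = simulate_crash_score(candidate)   # testing
        if score > best_score:                    # validation / feedback
            best_design, best_score = candidate, score
    return best_design, best_score

if __name__ == "__main__":
    design, score = design_test_feedback()
    print(f"Best simulated design: {design} (score={score:.2f})")
```

The point of the sketch is the shape of the loop, not the optimizer: each pass through it is one complete cycle of idea creation, testing, and validation, and a computer can run millions of such cycles in the time a lab needs for one physical test.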
A more familiar example might be the face-to-face interview, since nearly every working adult has had to endure one. Improving the quality of hires is a constant goal for companies, but how do you do it? A human recruiter’s curiosity could inspire them to vary future interviews by question or duration. In this case, the process for testing new questions and grading criteria is limited by the number of candidates and observations. In some cases, a company may lack the applicant volume to do any meaningful studies to perfect its interview process. But machine learning can be applied directly to recorded video interviews, and the learning-feedback process can be tested in seconds. Candidates can be compared based on features related to speech and social behavior. Microcompetencies that matter — such as attention, friendliness, and achievement-based language — can be tested and validated from video, audio, and language in minutes, while controlling for irrelevant variables and eliminating the effects of unconscious (and conscious) biases. In contrast, human interviewers are often not curious enough to ask candidates important questions — or they are curious about the wrong things, so they end up paying attention to irrelevant factors and making unfair decisions.
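As a rough illustration of the feature-based scoring described above, the sketch below fits a simple classifier on hypothetical, already-extracted interview features (attention, friendliness, and achievement-based language). The feature names, the data, and the use of scikit-learn’s LogisticRegression are assumptions made for illustration, not the systems referenced in the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features extracted from recorded interviews:
# [attention, friendliness, achievement_language], each on a 0-1 scale.
X = np.array([
    [0.9, 0.8, 0.7],
    [0.4, 0.6, 0.2],
    [0.8, 0.5, 0.9],
    [0.3, 0.4, 0.1],
    [0.7, 0.9, 0.6],
    [0.2, 0.3, 0.3],
])
# Illustrative labels: 1 = later rated a strong hire, 0 = not.
y = np.array([1, 0, 1, 0, 1, 0])

# Fit a transparent model so each microcompetency's weight can be inspected
# and validated, rather than left to an interviewer's gut feeling.
model = LogisticRegression()
model.fit(X, y)

# Score a new candidate's extracted features in milliseconds.
new_candidate = np.array([[0.85, 0.7, 0.8]])
print("Predicted probability of a strong hire:",
      model.predict_proba(new_candidate)[0, 1])
```

Because the features and weights are explicit, the learning-feedback cycle can be rerun in seconds whenever new outcome data arrives, which is exactly the speed advantage the paragraph above describes.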
Lastly, consider a human playing a computer game. Many games start out with repeated trial and error, so humans must attempt new things and innovate to succeed in the game: “If I try this, then what? What if I go here?” Early game-playing bots were not very impressive, because they relied on full game-state information; they knew where their human rivals were and what they were doing. But since 2015 something new has happened: computers can beat us on equal grounds, without any privileged game-state information, thanks to deep learning. Both humans and computers make real-time decisions about their next move based on what they see on the screen. As an example, see this video of a deep network learning to play the game Super Mario World.
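To illustrate the “equal grounds” setup, here is a minimal sketch of the agent-environment loop used in deep reinforcement learning, written against the gymnasium API. The choice of CartPole as the environment and the placeholder policy are assumptions; a real system like the Super Mario World player would replace choose_action with a deep network trained on raw screen pixels.

```python
import gymnasium as gym

def choose_action(observation, action_space):
    """Placeholder policy: in deep RL this would be a neural network that
    maps the raw observation (e.g., screen pixels) to an action, with no
    access to the game's internal state."""
    return action_space.sample()  # random action stands in for a trained policy

# CartPole is a stand-in; an Atari-style game would expose raw pixels instead.
env = gym.make("CartPole-v1")
observation, info = env.reset(seed=0)

total_reward = 0.0
for _ in range(500):
    action = choose_action(observation, env.action_space)
    # The agent, like a human player, sees only the observation and the reward.
    observation, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        observation, info = env.reset()

env.close()
print("Reward accumulated by the placeholder policy:", total_reward)
```

The key design point is what the agent is not given: unlike early bots, it never reads the game’s internal state, so it competes under the same constraints as a human player.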
From the above examples, it may seem that computers have surpassed humans when it comes to specific (task-related) curiosity. It is clear that computers can constantly learn and test ideas faster than we can, so long as they have a clear set of instructions and a clearly defined goal. However, computers still lack the ability to venture into new problem domains and connect analogous problems, perhaps because of their inability to relate unrelated experiences. For instance, the hiring algorithms can’t play checkers, and the car design algorithms can’t play computer games. In short, when it comes to performance, AI will have an edge over humans in a growing number of tasks, but the capacity to remain capriciously curious about anything, including random things, and to pursue one’s interests with passion may remain exclusively human.
Source: Harvard Business Review