The new world of work: You plus AI


New technologies meet both advocates and opponents as users weigh the potential benefits against the potential risks. To implement new technology successfully, we need to start small, in simplified form, applying it to a handful of use cases as a proof of concept before scaling up. Artificial intelligence is no exception, but it carries the added challenge of penetrating the cognitive sphere that has always been the prerogative of humans. Only a small group of specialists understands how this technology works, so more public education is needed as AI becomes increasingly integrated into society.

I recently met with Josh Feast, CEO and co-founder of Boston-based AI company Cogito, to discuss the role of AI in the new era of work. Here’s a look at our conversation.

Igor Ikonnikov: Artificial intelligence can be an incredibly powerful tool, as you know from your experience starting and growing an AI-based company. However, there are many people who have raised concerns about the impact on the workforce and whether this new technology will one day replace them. Let’s cover this topic first: are you concerned about AI coming for jobs?

Josh Feast: You are right, this question has been asked many times over the past few years. I believe the time has come to focus on how we can shape the relationship between AI and humans to make sure we are happy with the outcome, rather than facing an uncertain future. What I mean is that we live in a world where humans and machines work together and will continue to work together. So instead of fighting technological progress, we have to embrace and use it. Our emotionality as people will always ensure that we remain an important resource in the workplace, even as companies use AI technology to revolutionize the modern enterprise. The idea is not to replace people, but to complement them with technology, or simply to help them.

David De Cremer, Provost's Chair Professor at NUS Business School, and Garry Kasparov, chairman of the Human Rights Foundation and founder of the Renew Democracy Initiative, agree. They previously stated, “The question of whether AI will replace human workers assumes that AI and humans have the same qualities and abilities – but in reality they don’t. AI-based machines are fast, more precise, and consistently rational, but they are not intuitive, emotional, or culturally sensitive.” By combining the strengths of AI and humans, we can be even more effective.

Ikonnikov: The past 15 months have been disruptive in many ways, including a surge in both the value of personal interactions and the need for a higher level of automation. Is this the opportunity to combine those strengths?

Feast: More than a year on, with remote working now the norm for millions of people, almost everything we do is digitized and delivered through technology. We have noticed improvements in efficiency and productivity, but also a growing need to fill the empathy deficit and increase energy and positive interactions. In other words, AI already works in symbiosis with humans, so it is up to us to define what this partnership should look like in the future. That requires an open mind, active optimism, and empathy to realize the full potential of the human-AI relationship. I believe this is where people-conscious technology can play a huge role in shaping the future.

Ikonnikov: Can you elaborate on what people-conscious technology is?

Feast: People-conscious technology has the ability to see what people need in the moment in order to augment our innate abilities, including the ability to respond to and support our emotional and social intelligence. It opens new doors for technological expansion into new areas. One example today is “intelligent” prostheses based on human-machine interfaces that help limbs really feel like an extension of the body, such as the robotic arm being developed at the Johns Hopkins Applied Physics Laboratory. The robotic arm has human-like reflexes and sensations; it contains sensors that provide feedback on temperature and vibration and collect data to mimic what human limbs can perceive. As a result, it reacts much like a natural arm.

The same concept applies to people working at scale in a company, where a significant part of our work involves interacting with other people. Sometimes in these interactions we miss cues, get triggered, or fail to see another person’s perspective. Technology can support us as an objective “recognizer” of patterns and signals.
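To make the “recognizer” idea concrete, here is a minimal sketch of spotting two simple conversational patterns, long silences and overlapping speech, from timestamped speaking turns. The `Turn` structure and the 2-second threshold are illustrative assumptions, not a description of Cogito’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # e.g. "agent" or "customer"
    start: float   # seconds from call start
    end: float

def conversation_cues(turns, pause_threshold=2.0):
    """Flag long pauses between turns and overlaps (one speaker
    starting before the previous speaker has finished)."""
    cues = []
    for prev, cur in zip(turns, turns[1:]):
        gap = cur.start - prev.end
        if gap >= pause_threshold:
            cues.append(("long_pause", prev.end, gap))
        elif gap < 0:
            cues.append(("overlap", cur.start, -gap))
    return cues

turns = [
    Turn("customer", 0.0, 4.0),
    Turn("agent", 7.5, 12.0),      # 3.5 s of silence before the reply
    Turn("customer", 11.0, 15.0),  # starts talking over the agent
]
print(conversation_cues(turns))
# → [('long_pause', 4.0, 3.5), ('overlap', 11.0, 1.0)]
```

A real system would, of course, work from audio rather than clean turn timestamps, but the principle is the same: surfacing objective signals a participant might miss in the moment.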

Ikonnikov: As we continue to use this people-conscious AI, you have said we need to find a balance between machine intelligence and human intelligence. How does this translate to the workplace?

Feast: To find and optimize this balance and successfully cope with the challenges in the workplace, several levers have to be pulled.

To empower AI to help us, we need to make AI active and thoughtful – the more we do this, the more helpful it will be to individuals and organizations. In fact, a team at Microsoft’s Human Understanding and Empathy group believes that “with the right training, AI can better understand its users, communicate with them more effectively, and improve their interactions with technology.” We can train technology through processes similar to those we use to train people: rewarding it for achieving external goals, like getting a task done on time, but also for achieving our internal goals, like maximizing our satisfaction – also known as extrinsic and intrinsic rewards. By providing AI with data on what works intrinsically for us, we increase its ability to support us.
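As a toy illustration of combining the two reward signals Feast describes, one could blend an extrinsic signal (was the task done on time?) with an intrinsic one (self-reported satisfaction). The 0-to-1 satisfaction scale and the equal weighting are my assumptions for the sketch, not a description of any real training setup.

```python
def blended_reward(task_done_on_time: bool, satisfaction: float,
                   intrinsic_weight: float = 0.5) -> float:
    """Blend an extrinsic reward (task completed on time -> 1.0, else 0.0)
    with an intrinsic reward (satisfaction in [0, 1]).
    intrinsic_weight controls how much the internal goal counts."""
    extrinsic = 1.0 if task_done_on_time else 0.0
    return (1 - intrinsic_weight) * extrinsic + intrinsic_weight * satisfaction

# A task finished on time still scores higher when the person
# also found the interaction satisfying.
print(blended_reward(True, 0.5))   # → 0.75
print(blended_reward(False, 0.5))  # → 0.25
```

The point of the sketch is the shape of the objective: an assistant optimized only on the extrinsic term ignores how the work feels to the human, which is exactly the gap intrinsic data is meant to close.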

Ikonnikov: What would the result look like as the workplace evolves and AI becomes more and more anchored in our daily work processes?

Feast: Success at work increases when companies pair people with AI to create an enhanced experience at the crucial moments. It is in these in-the-moment interactions that the new wave of possibilities arises.

In a personal conversation, for example, both participants initiate, recognize, interpret, and react to each other’s social signals, what some call a conversational dance. Over the past year we have all had to communicate via video and voice calls, which challenged the nature of this conversational dance. In the absence of other communication channels such as eye contact, body language, and shared personal experiences, voice (and now video) is the only way a team member or manager can express emotion in a conversation. Whether it is a conversation between employee and customer or between employee and supervisor, these are decisive moments for a company. People-conscious AI, trained by humans in the same way that we train ourselves, can expand our capabilities in these scenarios, helping us when it matters and producing better results.

Ikonnikov: There has been a big shift lately in the conversation around AI when it comes to regulation. The European Union, for example, has put forward a sweeping proposal to regulate the use of AI, the first of its kind. Do you think that AI needs to be better regulated?

Feast: Together we have an obligation to develop technologies that are effective and fair for everyone – we are not here to build everything that can be built without limits or restrictions when it comes to basic human rights. That means we have a responsibility to regulate AI.

The first step toward successful AI regulation is data regulation. Data is the central resource that defines the creation and delivery of AI, and we are already seeing unintended consequences of leaving it unregulated. For example, there is no level playing field for companies deploying AI, since capability varies greatly from company to company based on the amount and quality of data they have. This imbalance will affect the development of technology, economics, and more. As market leaders and brands, we need to work actively with regulators to create common parameters that level the playing field and build trust in AI.

Ikonnikov: How can AI technology developers gain this trust?

Feast: We need to focus on implementing ethical AI by bringing transparency to the technology and delivering clear benefits to all users. This also extends to providing education and training opportunities. We also need to actively mitigate the underlying biases in the models and systems we use. AI leaders and creators need to conduct extensive research into bias-reduction approaches, such as examining gender and racial bias. This is an important step in building trust in AI and responsibly implementing the technology across organizations and populations.
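One simple, widely used check in this space is demographic parity: comparing a model’s positive-prediction rate across groups. The sketch below is a minimal version of that single metric (my choice of metric for illustration; real bias audits combine several complementary measures and domain review).

```python
def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction
    rates across groups; 0.0 means all groups receive positive
    predictions at the same rate."""
    rates = {}
    for pred, group in zip(predictions, groups):
        pos, total = rates.get(group, (0, 0))
        rates[group] = (pos + (1 if pred == 1 else 0), total + 1)
    shares = [pos / total for pos, total in rates.values()]
    return max(shares) - min(shares)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group "a" gets positive predictions 3/4 of the time, group "b" 1/4.
print(demographic_parity_gap(preds, groups))  # → 0.5
```

A gap this large would flag the model for closer inspection; a near-zero gap does not by itself prove fairness, which is why such metrics are a starting point rather than a verdict.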

We also need to ensure that AI creators who are themselves diverse – with different demographics, immigration statuses, and backgrounds – are given a chance. It is the creators who define which problems we address with AI, and more diverse creators will lead AI to address a wider range of problems.

Without these parameters – without trust – we cannot fully exploit the advantages of AI. On the other hand, if we get this right and, as AI creators and executives of related organizations, do the work to earn trust and make AI thoughtful, the result will be responsible AI that truly works in symbiosis with us and more effectively supports us in shaping the future of work.

VentureBeat