What is Weak AI?

Weak AI, also known as Narrow AI, is machine intelligence limited to a specific or narrow task. It simulates aspects of human cognition and benefits people by automating time-consuming tasks and by analyzing data in ways that humans sometimes can’t.

Weak AI lacks human consciousness, though it may be able to simulate it. The classic illustration of weak AI is John Searle’s Chinese room thought experiment: a person outside a room holds what appears to be a conversation in Chinese with a person inside the room, who follows written instructions for composing replies in Chinese. The person inside appears to speak Chinese, but without the instructions they could not speak or understand a word of it. They are good at following instructions, not at speaking Chinese. Such a system might appear to have Strong AI – machine intelligence equivalent to human intelligence – but it really only has Weak AI.

Narrow or weak AI systems do not have general intelligence; they have specific intelligence. An AI that is an expert at telling you how to drive from point A to point B is usually incapable of challenging you to a game of chess. And an AI that can pretend to speak Chinese with you probably cannot sweep your floors.

Weak AI helps turn big data into usable information by detecting patterns and making predictions. Examples include Facebook’s news feed, Amazon’s purchase suggestions and Apple’s Siri, the iPhone assistant that answers users’ spoken questions. Email spam filters are another example of Weak AI: an algorithm learns which messages are likely to be spam, then redirects them from the inbox to the spam folder.
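To make the spam-filter example concrete, here is a minimal sketch of how such a filter might learn from labeled messages. It uses a naive Bayes approach, one common technique for this task; the training messages and function names below are invented for illustration, not taken from any real filter.

```python
import math
from collections import Counter

# Hypothetical training data: (message, is_spam) pairs invented for illustration.
TRAINING = [
    ("win a free prize now", True),
    ("claim your free money", True),
    ("meeting rescheduled to friday", False),
    ("lunch plans for tomorrow", False),
]

def train(examples):
    """Count word frequencies per class to estimate P(word | class)."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in examples:
        for word in text.split():
            counts[is_spam][word] += 1
            totals[is_spam] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Sum of log-probability ratios; positive means 'more like spam'."""
    vocab = len(set(counts[True]) | set(counts[False]))
    score = 0.0
    for word in text.split():
        # Laplace smoothing so unseen words don't zero out the estimate.
        p_spam = (counts[True][word] + 1) / (totals[True] + vocab)
        p_ham = (counts[False][word] + 1) / (totals[False] + vocab)
        score += math.log(p_spam / p_ham)
    return score

counts, totals = train(TRAINING)
print(spam_score("free prize money", counts, totals) > 0)   # True: scored as spam
print(spam_score("lunch on friday", counts, totals) > 0)    # False: scored as ham
```

The point of the sketch is the "narrowness": the same program that separates spam from legitimate mail has no capability beyond that single task.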

Limitations of Weak AI

Beyond its limited capabilities, Weak AI raises other problems. A system can cause harm if it fails – think of a driverless car that miscalculates the location of an oncoming vehicle and causes a deadly collision – or if it is used by someone who intends harm, such as a terrorist who uses a self-driving car to deploy explosives in a crowded area. Another issue is determining who is at fault for a malfunction or a design flaw.

A further concern is the loss of jobs caused by the automation of an increasing number of tasks. Will unemployment skyrocket, or will society find new ways for humans to be economically productive? Though the prospect of a large share of workers losing their jobs may be frightening, it is reasonable to expect that, should this happen, new jobs we cannot yet predict will emerge as the use of AI becomes increasingly widespread.