First AI Wave: Handcrafted Knowledge

The first wave of AI saw experts devise algorithms and software based on their own knowledge. Think chess-playing computers and delivery-optimization software. This type of AI forms the basis of some AI still used today, such as smartphone mapping software and traffic lights that let pedestrians cross the street by pressing a button. Systems from the first wave of AI are built on clear, logical rules; while able to perform straightforward reasoning tasks, they can do little more than draw conclusions from the parameters of the situations they were designed to handle. Humans must identify those parameters in advance, which leaves such AI systems unable to cope with new situations that have not been pre-programmed. First-wave AI systems are best at applying simple logical rules to well-defined problems; they are incapable of learning, and they have a hard time dealing with uncertainty.
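The pedestrian-crossing example above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of handcrafted knowledge, not any real traffic system: every rule is written by a human, so the controller handles only the situations its author anticipated.

```python
# First-wave AI sketch: handcrafted rules for a pedestrian crossing.
# The function name, inputs, and thresholds are invented for illustration.

def crossing_signal(cars_waiting: int, button_pressed: bool) -> str:
    """Decide the pedestrian signal from hand-written rules."""
    if button_pressed and cars_waiting == 0:
        return "walk"        # crossing requested and the road is clear
    if button_pressed:
        return "wait"        # crossing requested, but traffic must clear first
    return "dont_walk"       # no request: keep traffic flowing

print(crossing_signal(0, True))   # walk
print(crossing_signal(3, True))   # wait
```

A scenario the author never wrote a rule for (say, a fallen tree blocking the crossing) simply does not exist for this system, which is the limitation the first wave runs into.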
Second AI Wave: Statistical Learning

At DARPA's first Grand Challenge in 2004, fifteen autonomous vehicles competed to finish a 150-mile course in the Mojave desert. The vehicles, which relied on first-wave AI, highlighted its limitations. The Grand Challenge deputy program manager said the vehicles "were scared of their own shadow, hallucinating obstacles when they were not there." By the end of the day, only the course remained undefeated. In the 2005 Grand Challenge, five teams made it to the end of the track, all of which relied on the second wave of AI, known as statistical learning. The head of the team behind the winning vehicle, Stanley, was promptly hired by Google. In second-wave AI systems, engineers and programmers develop statistical models for certain types of problems, then 'train' these models on many varied samples to make them more precise and efficient. Statistical learning systems can distinguish between two different people or between different vowels, and they can learn and adapt to new situations if properly trained. In artificial neural networks, for instance, data passes through computational layers, each of which processes it in a different way and passes it on to the next layer. By training each of these layers, as well as the network as a whole, they can be shaped into producing accurate results. Second-wave systems are good at face recognition, speech transcription, and identifying animals and objects in photos, and at controlling autonomous cars and aerial drones. Microsoft's social media bot, Tay, was based on this type of AI. Designed to replicate the speech patterns of a 19-year-old American girl and chat with teens in their own slang, Tay was instead pranked by those very users, who taught her that Hitler was great and successful. She learned all too well, and Microsoft's engineers took Tay down. Perhaps Tay will be back one day, after Microsoft's engineers re-educate her.
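The idea of training a model on many samples, rather than writing rules by hand, can be sketched with a toy perceptron, one of the simplest statistical learners. The data below is invented (two made-up clusters standing in for, say, measurements of two different vowels); this is an illustrative sketch, not any production system.

```python
# Second-wave AI sketch: a perceptron adjusts its weights from labelled
# examples instead of following hand-written rules.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear classifier from (features, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred              # zero when the prediction is correct
            w[0] += lr * err * x1       # nudge the weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Two invented clusters standing in for "vowel A" vs "vowel B".
samples = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = [0, 0, 1, 1]
w, b = train_perceptron(samples, labels)

def classify(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print(classify(0.15, 0.15), classify(0.85, 0.85))  # 0 1
```

Nothing in the code says what distinguishes the two classes; the boundary is learned from the samples, which is exactly why a system like Tay ends up reflecting whatever data its users feed it.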
Third AI Wave: Contextual Adaptation

Third-wave AI systems will discover, on their own, the logical rules that shape their decision-making process. When a third-wave system assesses a situation, it will probably be able to explain some of its logic. DARPA offers the example of recognizing a cow: while a second-wave system can say there is a percentage chance that something is a cow, a third-wave system can explain why it thinks so. "There's a whole lot of work to be done to be able to build these systems," said DARPA's Information Innovation Office. Once third-wave AI systems can devise new models that improve their own function, without human input, they will theoretically be able to program new generations of software. AIs with a complex understanding of context and the consequences of actions could replace most human workers, possibly all of them in time. And once allowed to reshape the models through which they appraise the world, they will effectively be able to reprogram their own motivation.
Can crowdsourcing herald a new era of AI?

One study estimates that between 2012 and 2017, the market for crowdsourcing in the US grew by about 37%, with its total value estimated at $6.5 billion. Crowdsourcing is the approach taken by blockchain-based Mind AI, whose operation depends in part on ontology models generated from crowdsourced data. Feeding an AI model with data generated by a large and diverse population could accelerate development quite radically. Mind AI combines a core AI engine with an ontology database designed to store vast datasets. These knowledge bases are distributed — hence blockchain — which addresses both data storage and the need for a supercomputer to do the processing. Endor Protocol takes a similar approach, building its AI around predictive analytics: it makes AI-driven predictions accessible to businesses of all sizes, allowing the user to ask a predictive question and receive a direct answer. Its methodology centers on data science and analytics, providing automated AI predictions with insights drawn from analysis of contributed data.
The future could come quicker than expected

The more quality knowledge a model can draw on, the better, and the AI with access to the most is likely to develop fastest. Crowdsourcing data could be an effective way to usher in the third wave of AI and herald a new era of technology. Mind AI has an intriguing proposition to this end: as a blockchain-based platform, it can make micropayments in crypto tokens, which should incentivize a wide network of participants to contribute knowledge. One thing is for sure: once the gears of third-wave AI are in motion, they will move fast.