Deep Learning

Artificial intelligence refers broadly to machines performing tasks that typically require human intelligence. It encompasses machine learning, in which machines learn from experience and acquire skills without explicit human instruction. Deep learning is a subset of machine learning in which artificial neural networks, algorithms inspired by the human brain, learn from large amounts of data. Similarly to how we learn from experience, a deep learning algorithm performs a task repeatedly, each time tweaking it a little to improve the outcome.
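The "repeat and tweak" idea can be sketched in a few lines. Below, a single artificial neuron with one weight learns the mapping y = 2x by gradient descent; the data, learning rate and epoch count are illustrative assumptions, not part of any particular library.

```python
# A minimal sketch of iterative learning: one artificial neuron with a
# single weight, nudged a little on every pass to reduce its error.
# All values here are made up for illustration.

def train(xs, ys, lr=0.01, epochs=200):
    w = 0.0  # the neuron's single weight
    for _ in range(epochs):          # perform the task repeatedly...
        for x, y in zip(xs, ys):
            pred = w * x             # current guess
            error = pred - y         # how wrong the guess was
            w -= lr * error * x      # ...tweaking a little each time
    return w

# Training pairs that follow y = 2x; the learned weight converges toward 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
learned_weight = train(xs, ys)
```

Real deep learning stacks many such units into layers and uses far more data, but the loop of predict, measure error, and adjust is the same.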

Practical examples of deep learning:

  • Virtual assistants – Whether Alexa, Cortana or Siri, the virtual assistants of online service providers use deep learning to understand your speech and the language humans use when interacting with them.
  • Translations – Deep learning algorithms can automatically translate between languages. This can be powerful for travellers, business people and those in government.
  • Autonomous driving – The more data the algorithms receive, the more human-like their information processing becomes – for example, recognising that a stop sign that is faded or covered with snow or dirt is still a stop sign.
  • Chatbots – Thanks to deep learning, companies can respond in an intelligent and helpful way to a growing volume of spoken and written questions.
  • Pharmaceutical – Many pharmaceutical and medical companies now use deep learning to help discover new drugs faster and more effectively than existing methods allow.
  • Personalized shopping and entertainment – Ever wonder how Amazon suggests what you should buy next, or how Netflix knows what you want to watch? That’s deep learning algorithms at work.

Machine Vision

Machine vision is a form of artificial intelligence in which machines “perceive” the world, analyze visual data, and then make decisions or gain an understanding of the environment and situation. One driving factor behind the growth of machine vision is the amount of data we generate today, which is used to train machine vision systems and make them better. Our world holds countless images and videos from the built-in cameras of our mobile devices alone. And while visual data usually means photos and videos, it can also include data from thermal or infrared sensors and other sources. As the field of machine vision has grown with new hardware and algorithms, so have accuracy rates for object identification.
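A core building block behind modern object identification is the convolution: sliding a small filter over pixel data to detect local structure such as edges. The sketch below is a hedged, from-scratch illustration using a tiny made-up grayscale image and a Sobel-style vertical-edge kernel; production systems learn their kernels from data rather than hand-coding them.

```python
# A minimal sketch of how visual data is processed: a 2D convolution
# (strictly, cross-correlation) of a tiny grayscale image with an
# edge-detection kernel. Image values are illustrative assumptions.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution: no padding, kernel not flipped."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Weighted sum of the pixel window under the kernel.
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 4x4 image with a sharp dark-to-bright boundary down the middle.
image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
# Sobel-style kernel that responds strongly to vertical edges.
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]
edges = convolve2d(image, kernel)
```

Every output cell here is large and positive because every 3x3 window straddles the brightness boundary; on a uniform region the same kernel would return zeros. Convolutional neural networks stack many learned kernels like this one.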

Practical examples of machine vision:

  • Autonomous vehicles – Machine vision is necessary to enable self-driving cars. Manufacturers such as Tesla, BMW, Volvo, and Audi use multiple cameras, lidar, radar, and ultrasonic sensors to acquire images of the environment so that their self-driving cars can detect objects, lane markings, signs and traffic signals and drive safely.
  • Facial recognition – Capable of identifying or verifying a person from a digital image or a video frame from a video source, this technology is used for policing, on payment portals, at security checkpoints and many other applications.
  • Healthcare – Almost all medical data is image-based, leaving a plethora of uses for machine vision in medicine: enabling new diagnostic methods, analysing X-rays, mammograms and other scans, monitoring patients to identify problems earlier, and assisting with surgery. Medical institutions, professionals and patients are expected to benefit from machine vision today, and even more so in the future as it is rolled out across healthcare.
  • Real-time sports tracking – Tracking the ball’s movement in televised sports has been common for a while now, but machine vision is also helping with play and strategy analysis, player performance and ratings, and tracking brand sponsorship visibility in sports broadcasts.
  • Manufacturing – Machine vision is helping manufacturers run more safely, intelligently and effectively in a variety of ways. Predictive maintenance is just one example, where equipment is monitored with machine vision so that intervention can happen before a breakdown causes expensive downtime. Packaging and product quality are also monitored, and defective products reduced, with machine vision.

Natural Language Understanding

Natural language understanding (NLU) is a branch of artificial intelligence that uses computer software to understand input in the form of sentences, in text or speech format. NLU uses algorithms to reduce human speech to a structured ontology: the AI fishes out such things as intent, timing, locations and sentiment. For example, a request for a camping trip in the state of Pahang on the 6th of September might break down like this: bus tickets [intent] / camping lot reservation [intent] / Taman Negara [location] / September 6th [date].
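A toy version of that breakdown can be written as rule-based slot filling. The patterns, labels and sample sentence below are illustrative assumptions for the camping-trip example; real NLU systems use trained statistical models rather than hand-written rules.

```python
import re

# A toy, rule-based sketch of NLU slot filling: pull an intent, a
# location and a date out of a free-text request. The vocabulary and
# regular expressions here are made up for illustration.

def parse_request(text):
    parsed = {}
    # Intent: a crude keyword rule standing in for an intent classifier.
    if re.search(r"\b(camp|camping)\b", text, re.IGNORECASE):
        parsed["intent"] = "camping lot reservation"
    # Location: match against a tiny list of known places.
    loc = re.search(r"\bin (Taman Negara|Pahang)\b", text)
    if loc:
        parsed["location"] = loc.group(1)
    # Date: "6th of September" -> "September 6".
    date = re.search(r"\b(\d{1,2})(?:st|nd|rd|th)? of (\w+)\b", text)
    if date:
        parsed["date"] = f"{date.group(2)} {date.group(1)}"
    return parsed

request = "I need a camping spot in Taman Negara on the 6th of September"
parsed = parse_request(request)
```

Running this on the sample request yields an intent of "camping lot reservation", a location of "Taman Negara" and a date of "September 6", mirroring the slot-by-slot breakdown described above.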

NLU is tasked with communicating with untrained individuals and understanding their intent, meaning that NLU goes beyond understanding words and interprets meaning. NLU is even programmed with the ability to understand meaning despite common human errors like mispronunciations or transposed letters or words.
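One simple way software can tolerate transposed letters is fuzzy string matching against a known vocabulary. The sketch below uses Python's standard-library difflib; the vocabulary and cutoff are illustrative assumptions, and real NLU systems use far more sophisticated error models.

```python
import difflib

# A small sketch of tolerating human typing errors: map a misspelled
# word to its closest match in a known vocabulary. The word list and
# similarity cutoff are illustrative assumptions.

vocabulary = ["reservation", "camping", "september", "tickets"]

def correct(word):
    # get_close_matches ranks candidates by similarity ratio and
    # discards anything below the cutoff.
    matches = difflib.get_close_matches(word.lower(), vocabulary,
                                        n=1, cutoff=0.6)
    return matches[0] if matches else word

fixed = correct("resrevation")  # transposed letters in "reservation"
```

Here "resrevation" is close enough to "reservation" to be corrected, while a word that resembles nothing in the vocabulary is returned unchanged.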