Big Data, Artificial Intelligence, Machine Learning and Computer Vision
Turns out that Computer Vision requires Machine Learning, and Machine Learning requires Artificial Intelligence. Artificial Intelligence is of most use when there are large amounts of data to process, so in a way AI relies on big data (though not always). The cognoscenti use the abbreviations CV, ML, AI and, err… "big data" to refer to these technologies.
Some simple definitions
These are what are called HORIZONTAL TECHNOLOGIES, because they are general and can be applied to a range of problems across industries. They are already having an impact in some areas but they are not a panacea.
Big Data relates to collecting and storing large quantities of data and being able to access and manipulate it quickly, using high-speed networks, special search algorithms, parallel processing and so on. It gives rise to a whole world of shared resources, cloud computing and specialised storage schemes. When real-time information is included (such as sensor data), this becomes the world of IoT, time-series data and edge processing.
AI is a way to analyse the information contained in big data very quickly: creating inferences between data signals and looking for patterns, sometimes to predict outcomes and reduce uncertainty. AI is very narrow in its applicability (even Bill Gates says you wouldn't trust it to order your inbox for you), so its ability to make judgements is limited. A lot of what we talk about as AI is a form of linear regression, with mass computational power enabling the quick processing and crunching of large amounts of data. AI can give the illusion of being smart when in fact it can be easily fooled.
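To make that concrete, here is a minimal sketch (made-up numbers, nothing more than ordinary least squares) of the kind of linear regression that sits behind a lot of what gets called AI: fit a straight line through past data, then use it to predict.

```python
# "AI" as curve fitting: fit y = a*x + b to some observations
# using the closed-form ordinary least squares estimate.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.2, 5.9, 8.1, 10.0]  # roughly y = 2x, with a little noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope = covariance(x, y) / variance(x); intercept follows from the means.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(round(a, 2), round(b, 2))  # → 1.97 0.15: a line close to y = 2x
```

The "intelligence" here is just arithmetic over the data; with enough computing power the same idea scales to millions of points and many variables.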
Machine Learning is the ability of an algorithm to change over time by examining a changing stream of input information, comparing its computed outcome with the desired outcome and tuning itself accordingly. Neural networks are the best-known example. Machine Learning is an application of AI.
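That compare-and-tune loop can be sketched in a few lines (a toy illustration with invented data, not a real ML system): a single weight is nudged against the error until the computed outcome matches the desired one.

```python
# Toy "learning" loop: tune one weight w so that the prediction w*x
# matches the desired outcome, nudging w against the error each pass.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # desired relationship: y = 2x

w = 0.0    # initial guess
lr = 0.05  # learning rate: how big a tuning step to take
for _ in range(200):
    for x, y in data:
        error = w * x - y    # computed outcome vs desired outcome
        w -= lr * error * x  # tune the weight to shrink the error

print(round(w, 3))  # → 2.0: the algorithm has "learned" the relationship
```

Real systems tune millions of weights instead of one, but the shape of the loop (predict, compare, adjust) is the same.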
Computer Vision is an application of both AI and ML used to process images (still and moving); one of its most controversial uses is facial recognition and the automatic tracking of people.
Some of the take-aways from London Tech Week
Everyone that spoke (and I mean everyone) said that their biggest issue in applying any form of advanced analysis fell down on the quality of the data: the meaning of the information collected, and the way it is labelled, is wildly inconsistent. There were some semantics companies working on different ways of expressing ideas in language, which may hold a key to reconciling the differences between the labelling of data items. Until that takes off, 75%-90% of your AI budget is going to be spent cleaning up data and sorting out the meaning of feeds. The trouble is that you will spend that share of your budget with no tangible change in outcome, because you can't get started until it's sorted out.
I saw what I thought was a brilliant example of this from a Swedish company called Spacemaker (https://spacemaker.ai/ ). The company works alongside architects to help them choose between the complex trade-offs involved in selecting the layout of buildings: trade-offs between natural light, housing density, noise exposure, energy efficiency and so on. Given a set of optimisation inputs, the computer very quickly generates possible layouts, working alongside the architect and freeing them from the mundane but complex calculations and predictions of weather patterns, seasons etc. The result is much better buildings, without taking away the artistic judgement of the architects.
In Oil and Gas I can see a similar "advisor" system working alongside production engineers, economic planners, maintenance engineers, planners and schedulers, providing scenarios based on optimisation parameters and enabling them to choose the best configuration to implement.
I saw an example from a company called Darktrace (https://www.darktrace.com/en/ ), a simply brilliant commercial success with enough financing to dedicate serious money to PR, marketing, sales and distribution (as well as well-researched and well-implemented tech). Put briefly, their system sits at the network hubs in your organisation and reads each packet of information (sometimes understanding the content, but often only its source and destination). It uses ML to work out what normal looks like for you (and evolves), and if it sees something abnormal start, it can raise questions. It can also take action, using IP spoofing to intercept traffic and block comms. One of their success stories is the NHS trusts that installed their system and contained the WannaCry attack (https://www.bbc.co.uk/news/health-43795001 ).
This might be applicable to Oil and Gas: monitor all the signals in the real-time stack, learn what normal operation looks like, and then spot abnormalities as they start to occur. It would be harder, because the relationships are more complex than network traffic, but still, got to be worth a shot.
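One very crude way to picture "learning what normal looks like": keep the mean and spread of a signal's history and flag readings that stray too far from it. This z-score sketch (with invented sensor readings) is nothing like Darktrace's actual product, just the simplest possible version of the idea.

```python
import statistics

# "Learn" normal from a history of readings for one signal, then flag
# new readings more than 3 standard deviations from the learned mean.
history = [49.8, 50.1, 50.0, 49.9, 50.2, 50.0, 49.7, 50.3]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_abnormal(reading, threshold=3.0):
    """Flag a reading that sits outside the band of normal behaviour."""
    return abs(reading - mean) > threshold * stdev

print(is_abnormal(50.1))  # → False: within the range seen before
print(is_abnormal(57.5))  # → True: far outside anything seen before
```

A real system would track many signals, update its model continuously and account for relationships between signals, but the principle of modelling "normal" and alerting on deviation is the same.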
I found this one very interesting. I spoke to a company called Winnow, which has applied computer vision to the problem of cutting food waste in restaurants (https://venturebeat.com/2019/03/21/winnow-uses-computer-vision-to-help-commercial-kitchens-cut-food-waste/ ). Using a camera, it captures images of what the chef is throwing away and, through a series of algorithms, works out the amount and value of the waste. For instance, perhaps a chef orders the same amount of broccoli every day but really only uses most of it on Wednesdays, Fridays and Saturdays. Or maybe the ordering should change with the weather. Either way, if you can reduce the over-ordering then everyone wins.
People are already applying technology like this in industrial settings to check that people use their safety equipment (glasses, harnesses etc.). You could also point cameras at manual controls to create records of changes and current settings; it would be a cheap way to retro-fit instrumentation.
That’s all for now, but some of the other things I found out about included “The Ostrich Problem”, the real-world applications of AR/VR, more on cyber security and what 5G will really mean. Other things that are hot right now: commercial adoption, the future of work, small-scale adaptive robotics, AI ethics and decarbonisation. More on these later.