Matt Armstrong-Barnes: AI is a journey, not a destination. Once you start on that journey, your competitors will never close the gap because AI gives you that edge.
Craig Lodzinski: The fundamental underpinnings of everything we're doing in AI and research and all this weirdly hinges on a very small amount of incredibly complex maths. It's a bit of a journey into the unknown.
Michael Bird: Hello and welcome to Explain IT, brought to you by Softcat - the show for IT professionals, by IT professionals, that aims to simplify the complex and often overcomplicated bits of enterprise IT without compromising on detail. I'm your host, Michael Bird, and over the next 30 or so minutes I'll be challenging our panel of experts to take a different area of the IT ecosystem and, of course, ‘Explain IT’. In this episode we're going to be taking a look at AI and machine learning, and with me to help is Craig Lodzinski, who is Softcat’s Chief Technologist for emerging technologies. So Craig, is it true that you have been confused for Kit Harington by a stag do in Prague?
Craig Lodzinski: Yes absolutely. So my wife and a few mates of ours, we went to a gig in Prague and afterwards we were sat in the town square there, and then what I can only describe as a horde of about 40 German guys on a stag do, proceeded to start yell-singing the theme tune to Game of Thrones from across the square and then proceeded to march over and take photos.
Michael Bird: You do look pretty Kit Harington-esque at the moment. And we've also got Matt Armstrong-Barnes, who is HPE’s UK&I Chief Technologist for artificial intelligence. Matt, we were chatting earlier and you were saying you are a huge film and TV enthusiast. Can I ask what is your favourite film?
Matt Armstrong-Barnes: I get asked this a lot. Normally I dodge the question a little bit and talk about my favourite film based on genre. It has to be said, being a big film fan, what I do like is films that have had a major impact on me as I've been growing up and probably the biggest one, I have to say, is The Matrix. Amazing movie, such a change in terms of the way the cinematography was done and really, sort of, drove the film industry forward.
Michael Bird: So first question - what is AI and is it the same as machine learning?
Matt Armstrong-Barnes: So just to give a little bit of history on how we've got to AI. AI, as a mathematical concept, was originally created in the late 1940s, looking at missile trajectories - obviously there was a specific focus as to why we looked at that. Then it was Alan Turing in the 1950s, and a subsequent working group in the later 1950s - that's actually when the term ‘artificial intelligence’ was coined. So the mathematics around AI has been around for a long period of time, and it wasn't until 2012 that there was a big breakthrough that drove adoption from core academia into the mainstream. That came when an artificial neural network was created called AlexNet - cunningly created by a guy called Alex, Alex Krizhevsky. What he did was build a model that recognised images, and he wanted to enter it into a competition called ImageNet. In going through that, he worked out that it would take a term to run the mathematics associated with the image processing, and he also worked out that he needed to run it about 50 times. So he went and spoke to all his lecturers and they said, ‘this is great, you'll be ready in 50 terms then’ - and the ImageNet competition was in 12 months’ time. Alex was a gamer, so he sat down with a bunch of his gamer mates over a weekend and they ported their model from running on a CPU - a central processing unit - to running on a GPU, a graphical processing unit. That gave it roughly a tenfold increase in performance, and it went down from running in a term to running in a week. So he ran his model the 50 times he needed, he entered it into ImageNet, and not only did he win the competition, he won it significantly. That evolution in computing hardware, combined with maths that had been around for a very long period of time, was a key accelerator in pushing AI into the mainstream.
So let's unpack the terms I've used, because I've said ‘AI’ and a couple of other terms in there as well. If we think about artificial intelligence at the top level, there are three sub-levels inside that overarching concept. Artificial intelligence is a computer that is capable of operating in the same way that a human being does. We're not there yet - that's the stuff of science fiction; there's quite a lot of work happening to drive us towards it, but we're not there yet. The next sub-level down is machine learning, which is the capability of a computer to operate across a complex set of information, and draw analysis from it, without being explicitly programmed - in essence, it can learn instead of being programmed. The best way of achieving that, the next sub-level down, is artificial neural networks. There are lots of ways of building one, but fundamentally it's a representation of the way a biological brain works. And the most successful approach, the final level, is deep learning - a mathematical, many-layered representation of the way a biological brain works. So if I apply that from the bottom all the way to the top: we use a mathematical model to create an artificial neural network, which gives a machine the capability of learning instead of being programmed, which is a way of achieving a subset of artificial intelligence.
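To make "learning instead of being programmed" concrete: the sketch below is purely illustrative (it isn't from the episode) and trains a single artificial neuron - the smallest possible neural network - to learn the logical OR rule from labelled examples, rather than having the rule written into the code.

```python
# A perceptron "learns" the OR rule from examples instead of being
# explicitly programmed with it - the core idea behind machine learning.
# Illustrative toy only; real frameworks hide this maths for you.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from (inputs, label) pairs by trial and error."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = label - pred      # how wrong were we?
            w[0] += lr * err * x1   # nudge the weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    """Apply the learned weights to a new input."""
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The OR rule, given only as examples - never written as code:
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(examples)
```

Nothing in the training loop mentions OR; the behaviour emerges from the data, which is exactly the "learning, not programming" distinction.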
Craig Lodzinski: In terms of the differentiation between AI and machine learning and deep learning and neural networks, it becomes a little bit confused, because we have a confluence of academia, mathematics and the principles of AI. As Matt said, it's been a field of study ever since the post-war era, and we've essentially been waiting for technology to catch up so we can do everything that's been prophesied - and for general AI, we still are. But now that AI has become more prevalent and the techniques are starting to filter down, we're reaching almost a marketing problem. AI is a very sexy concept - the idea of an AI is very cool - but because of that, everything is being tarred with the AI brush. It's right at the top of the hype cycle, to use a bit of a Gartner-ism, so a lot of things are being bundled in with AI when they're just a big old bucket of Bayesian statistics that have been stirred with a stick and thrown out with a ‘hey, this is AI!’ - but that's not really it. The strict definition of what we classify as AI, and the different steps on that roadmap, look very different from an academic and research perspective than they do to the market.
Michael Bird: So what can an organisation do to take advantage of AI and machine learning today? Or is it just something that only companies with lots of technical expertise are able to use and take advantage of?
Matt Armstrong-Barnes: The good thing about AI having lived in the academic world for a very long period of time is that lots and lots of frameworks have been built. These mean you don't have to understand all the maths - if you want to go down the coding route, you can build an AI with a very small amount of code using the frameworks that are out there. So my view is that AI is a tool that should be in every organisation's toolkit. You do need to think about the data, because you need to understand the data you're using. What I find with a lot of organisations is that AI has become an interesting science project. So think about what you want to achieve: where you really want AI to play is where the rules you need to process the data are too complex to define, too costly to define, or too complex or costly to maintain. That's a perfect place for AI to play a role.
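A hedged illustration of that "rules too costly to define" point (example data entirely invented, not from the show): rather than hand-writing and maintaining keyword rules for sorting messages, a toy word-count classifier - a crude relative of Naive Bayes - learns the rules from a handful of labelled examples.

```python
# Toy text classifier: the "rules" for spotting spam are never written
# down - they are learned from labelled examples. Stdlib only.
from collections import Counter

def train(labelled_docs):
    """Count how often each word appears under each label."""
    counts = {}
    for text, label in labelled_docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose training vocabulary best matches the text."""
    words = text.lower().split()
    def score(label):
        c = counts[label]
        total = sum(c.values())
        s = 1.0
        for word in words:
            # add-one smoothing so unseen words don't zero the score
            s *= (c[word] + 1) / (total + 1)
        return s
    return max(counts, key=score)

# Invented training data - in practice you'd use thousands of examples:
docs = [
    ("urgent offer win prize now", "spam"),
    ("win money fast free offer", "spam"),
    ("meeting agenda for project review", "ham"),
    ("please review the project budget", "ham"),
]
model = train(docs)
```

Updating the classifier means adding examples, not rewriting rules - which is exactly why the approach wins when rules are too costly to maintain.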
Michael Bird: Have we got any stories of organisations using this today for some really interesting or practical tasks?
Matt Armstrong-Barnes: Without going into any specifics, there's a recent IDC report that talked about AI spend - in this financial year alone it's going to top 19 billion dollars. It's been driven by retail, so retail organisations spending quite significantly, and what they're looking at is product recommendations. We're also seeing significant spend in financial services, where they're looking at complex fraud detection. Then think about manufacturing and quality assurance - HPE, we're a big manufacturer, and we actually use both AI and blockchain as part of our manufacturing process. I don't know if you've ever seen a sheet of aluminium - it's not the most interesting thing, and trying to spot a problem on a sheet of aluminium is actually very difficult. What you can do is teach a deep learning algorithm to spot the difference between a defect in a sheet of aluminium and not a defect, or whether or not a circuit board has all of the relevant componentry plugged in correctly. And last of all, in healthcare: predicting people who may well have problems in the future, based on their medical history and a corpus of all medical knowledge, and also prioritising treatments and care pathways.
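As a toy sketch of the aluminium-sheet idea (a real system would use a trained deep learning model on camera images, not this; all numbers invented): treat the sheet as a grid of brightness readings and flag any cell that strays far from the sheet-wide norm.

```python
# Toy defect spotter: a defect shows up as a reading far from the
# median brightness of an otherwise uniform sheet.

def find_defects(sheet, tolerance=30):
    """Return (row, col) positions whose reading strays from the median."""
    values = sorted(v for row in sheet for v in row)
    median = values[len(values) // 2]
    return [(r, c)
            for r, row in enumerate(sheet)
            for c, v in enumerate(row)
            if abs(v - median) > tolerance]

# A 4x4 "sheet" of brightness values with one scratch at row 1, col 2:
sheet = [
    [200, 201, 199, 200],
    [202, 200, 120, 201],
    [199, 200, 200, 198],
    [201, 199, 202, 200],
]
```

The hard part in practice - and the reason deep learning wins here - is that real defects are subtle patterns, not single outlying pixels, so the "rules" can't be written by hand.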
Michael Bird: Does big data flow into that? I know it's a bit of a buzzword, but for healthcare are you plugging in loads and loads of data points across a huge cohort of people and then spitting out… OK we think these five people are going to have these issues because of patterns that we've seen amongst the rest of the cohort of data?
Matt Armstrong-Barnes: Yeah, definitely. Obviously data needs to be handled very carefully - the Care Record Guarantee in the UK covers some of the things that can be done with your health record, because it is personal. So assuming you've ticked all the boxes to say you want to participate in these trials and so on, what we can do is look at extremely complex information. You can also do some great stuff around modelling - you can start to combine digital twins and AI, so you can model things in the digital world and start to run some capability on that as well.
Craig Lodzinski: In terms of enterprise applicability, obviously it depends on the individual operating environment. Matt mentioned recommendation engines - I think this was back in 2015 or 2016, but one of the chiefs at Netflix said that their recommendation engine was worth over a billion dollars per annum, because it encourages people to use their service rather than others and it aids customer retention. It's really part of their core value proposition; as a ‘born in the cloud’, agile company, AI is incredibly useful to them. And it spans a lot of different organisations. On the academic side, that's where it all started, and we're still seeing great research and great applicability within academia. But one of the real benefits is that we now live in an open-source world - there's so much information out there and, as Matt was saying, frameworks. For example, there are CNN-RNN frameworks - combinations of a convolutional neural network and a recurrent neural network - that come pre-set up for you to use. Or you can go even further and start consuming AI as a service. That can be integrated into certain products: we're seeing pretty much every storage vendor now offering some sort of AI support service, using artificial intelligence and learning techniques to take the data off the storage array, process it, and provide predictive failure analysis, better support and better phone-home. That not only helps the customer but reduces the support workload on the back end, because a lot of this stuff will be resolved automatically.
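At its core, the "predictive failure analysis" Craig describes is anomaly detection on telemetry. Vendor services use far richer models, but a minimal sketch (telemetry values invented for illustration) just flags readings that sit far from the fleet's norm:

```python
# Toy predictive-failure check: flag devices whose telemetry reading
# deviates sharply from the rest of the fleet. Stdlib only.
import statistics

def flag_anomalies(readings, threshold=2.5):
    """Return indices of readings more than `threshold` standard
    deviations away from the fleet mean."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    if stdev == 0:
        return []  # perfectly uniform fleet, nothing to flag
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > threshold]

# e.g. invented reallocated-sector counts from ten drives in an array;
# drive 7 is drifting toward failure:
counts = [2, 3, 1, 2, 4, 2, 3, 250, 2, 1]
```

A real service would model trends over time per device rather than a single fleet snapshot, but the principle - learn "normal", flag departures from it - is the same.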
In addition to that, look at security products. Cylance, who have just been sold to BlackBerry for over a billion dollars, and all the other vendors in that space are using these types of techniques and integrating them into their products, because spotting patterns and spotting anomalies is part and parcel of cybersecurity. And I think this ‘buy versus build’ decision is going to become much more prevalent - it's an important one for companies to make. Obviously, the more you build your own, the more unique IP you're creating, so there's potential for an improved value chain there. But if that's not inside your core competency, you need to look at where you sit on the spectrum between how much effort you want to put in and how much you want to get out. That could mean purchasing an integrated product, or using a prebuilt platform - whether that's a software platform from one of the hyperscale cloud providers, or from companies like HPE who can help with that - or going further along the spectrum. It really depends on the specific use case you're looking to apply AI technologies to.
Matt Armstrong-Barnes: The best way of thinking about AI is that there are really three models: buy one that someone else has built, build one yourself, or take a hybrid approach between the two.
Michael Bird: So is there anything that an organisation or user would need to worry about? I'm thinking of things like bots, and I guess also some of the predictive stuff, where humans maybe aren't going to be involved as much. Is that not just going to make everything really impersonal?
Craig Lodzinski: One of the things that hit the press in terms of AI in 2018 was when Google demoed their Duplex voice product. It's an AI system designed to deal with small businesses that might not have a website or social media contact details: for the socially awkward millennials and Gen Z individuals that we are, it makes phone calls on your behalf, seamlessly, in the background, pretending to be human. And AIs pretending to be human is a really interesting topic, because naturally, machines that are doing maths don't need to speak in natural language. They're often used to understand natural language, but between two machines it's a very inefficient form of communication - two machines will never need or want to talk to each other in a human fashion unless they're trying to interact with humans, or pretend to be human. There's a certain uncanny valley there, and the Google Duplex demo was, I would argue, just on the wrong side of it. It felt a little bit disingenuous, because it was doing all the ‘mms’ and ‘ahhs’ and pauses of very natural speech, and unless there's a disclosure that you're speaking to the Google Duplex bot, that becomes a little bit strange. But I also think there's real potential for increased personalisation. Look at the bank teller problem: the shift to online and mobile banking has dramatically reduced the load on cashiers and tellers, which means there's less of a use case for high street banking. But what's actually happened is that the number of bank tellers has gone up, because they are now dealing with business customers, high-net-worth customers, or people that need particular care and attention. The quality of service within banks has improved because a lot of the mundane tasks - people paying in and withdrawing - have been automated away.
And there's certainly value in that filtering: use AI to filter out the mundane and allow humans to do great human stuff. Because we're still nowhere near a general-purpose AI - a Google Duplex bot can do a very small amount of human-like interaction, but it can't do a lot of human things.
Matt Armstrong-Barnes: The one challenge that we do see is regulation. Chatting to some of the banking and financial services customers we've got, they're looking at this kind of technology, but they're not really using it. And the reason is that you can't put a machine in prison - well, I suppose you could, but you're not going to get much value out of sticking it behind bars - and that comes down to accountability to the regulator. In a lot of regulated industries, you have to be able to explain how you've arrived at a decision, and you also have to be able to demonstrate that you've done so in the right way.
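One route to the explainability Matt describes is to keep decision logic whose outcomes carry an audit trail. This sketch is deliberately simpler than any real model, and the rule names and thresholds are entirely invented; the point is that every decision comes back with the reason that produced it - the property a regulator asks for, and the one black-box models struggle to provide.

```python
# Toy "explainable" decision list for a lending-style decision:
# every outcome is returned together with the rule that fired,
# so the decision can be justified after the fact.

RULES = [
    ("income below minimum", lambda a: a["income"] < 15_000, "decline"),
    ("existing arrears",     lambda a: a["arrears"] > 0,     "decline"),
    ("high income, no debt", lambda a: a["income"] > 60_000
                                       and a["debt"] == 0,   "approve"),
]

def decide(applicant):
    """Return (decision, reason) so every outcome is accountable."""
    for reason, test, outcome in RULES:
        if test(applicant):
            return outcome, reason
    return "refer", "no rule matched: manual review"
```

The tension in regulated AI is exactly this trade-off: a deep network may decide better than a rule list, but it cannot hand back a one-line reason the way this can.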
Craig Lodzinski: AI is incredibly interesting. That's probably a little bit biased with me and Matt in the room, because it's our bag, but certainly it's very interesting - there's a lot of potential in the air and it's fun and cool and sexy. Unfortunately, underneath, particularly when we look at organisational enterprise IT, it's the same boring projects. As Matt alluded to, it's still a project - a corporate IT project, the same as anything else. Yes, there's cool moonshot stuff you can do, and if you've got great researchers and great data scientists, give them a public cloud account, bang out some TensorFlow and some Python one afternoon a week and see what happens, for sure. But if you're serious about this, particularly if you're putting the crown jewels - the core data of your organisation - through it, it's the same old principles: enterprise architecture, the standard TOGAF principles. You've got your data and your application, and nowhere is the volume and significance of data and application more critical than within AI projects. It's the same old twelve-step programmes - still project management, enterprise architecture, technical architecture, the same model we've been dealing with for a long time. Even though this is a very new and exciting field for a lot of organisations, the fundamental principles really do remain the same.
Matt Armstrong-Barnes: AI is a tool and it should be in your toolkit. But there are a lot of things in your toolkit so you need to make sure that when you're tackling a problem, that you've got the right tool for what you need to do.
Michael Bird: Do you think an organisation that isn't taking advantage of some sort of AI in 2019 is missing out? Are they going to fall behind the curve?
Matt Armstrong-Barnes: There's quite a lot of research from the analysts, and what they're saying is that AI - whether you buy it ready-made, build it yourself, or go through that hybrid model partly built by someone else - will give you a differentiator over your competitors, however big that gap is. AI is a journey, not a destination. Once you start on that journey, your competitors will never close the gap: however big the gap is, however long ago you started in comparison to your competitors, if you keep on the AI journey you will always be ahead, because AI gives you that edge. So the answer to the question is that AI is mainstream now, so it is a case of adopting it to help you with that market differentiation.
Craig Lodzinski: There are a lot of technologies out there; it's not the be-all and end-all, and we're not saying all organisations need to embrace AI to survive. In the same way, you don't have to use cloud, public cloud or cloud-like technologies and you can still run an organisation; you don't need to use Microsoft software - you can run completely without Microsoft, or completely without open source, take your pick. But it is a tool in the tool bag, and when used in the right way it's a huge source of competitive advantage. So organisations should at least be assessing their data environment, AI technologies and the potential there - not necessarily deploying right now, but it's got to be on the radar, because there are huge implications for the use of this technology in all organisations, and for how you use it to drive, thrive and survive.
Michael Bird: And so what about the future?
Matt Armstrong-Barnes: Let me talk about something that we at HPE have done very recently. We looked at the games that AIs have beaten: checkers, chess, and Go, which is apparently the most complex game there is. So we asked, what's the next evolution? And the next evolution was poker. We did this a couple of years ago with a major university in the US. We took the algorithm we'd built and we challenged four of the world's top poker players - and yes, we lost spectacularly; I think the figure was $700,000, oops! But we then went back to those guys and asked, “Why did we lose?” and they said, “Your algorithm just can't bluff.” So we took it away and, with this famous university, spent a year training the algorithm to bluff. We took it back to those top four poker players, and we won 1.7 million dollars. Now think about the application of that technology. What's the difference between poker and Go? In poker, you don't know the cards the other person has. So with that type of technology, think about negotiations, or complex decisions that need to be made where you don't know what the other party holds. We're starting to get into a realm where some of the advice you can get from an AI is not predicated on information that is known - it's predicated on information that is unknown.
Michael Bird: That's kind of the short term then, let's look at the medium term, what's the medium term for AI?
Matt Armstrong-Barnes: Let's think of the medium term as the five-to-ten-year horizon - really an evolution of things that are happening now. Think in macro terms: world hunger, world poverty, energy problems. On world hunger, AI today is being used to reduce herbicide use by 80%. What's it doing? It's recognising what weeds look like. Instead of spraying whole crops - this is a small pilot that's running - it's using AI to target herbicide injection at specific plants. On energy, AI is being used to spot patterns in fusion research, so it's starting to drive forward some of the energy challenges we've got. And on longevity, it's being used to spot complex chemical formulas for drug applications. Those are things happening now. Then in the nearer term: lots of us wear trackers that monitor our movement and so on. The next evolution of these will let us understand our blood chemistry on a very regular basis. Combine that with family history, and AI can be very individual to my needs - it could adjust my vitamin intake on a daily basis, or go beyond ‘Matt, you need to do a few more steps today’ to recommending things that would significantly enhance my life expectancy. And then looking further out: we at HPE are working with NASA on a mission to Mars, which plans to put a man on Mars by 2030. If you look up, somewhere in space the International Space Station is actually powered by HPE technology, and we're going for an award at the minute for having a data centre in the most hostile of conditions - which is space. Where we're seeing AI applied on the mission to Mars is that all of the rovers that are going to be deployed will be AI-based.
So they're building rovers, because before you send a human being to Mars you need real telemetry about what's happening on the ground - and these things are going into the complete unknown. Defining a set of rules that would dictate how those rovers should operate is just impossible, so the only way of doing it, in terms of space exploration, is by deploying AI-based rovers onto Mars. That's the sort of 2030 horizon where we'll start to see that happening.
Michael Bird: And how far are we off having a HAL?
Craig Lodzinski: That's a weird question, and it's one of those horrible ones, because the fundamental underpinnings of everything we're doing in AI and research weirdly hinge on a very small amount of incredibly complex maths that has been enabled by some fantastic developments in computing. It may be that we're a thousand years away - that we come across some insurmountable problem and it takes an absolute genius to circumvent it. Or it could be that we simply don't have the computing power, and we need a 500-qubit quantum computer to make it all work. Or it could be that we crack it almost instantaneously, like the AlexNet moment we were referring to earlier, and suddenly we have this huge leap forward. Quantum computing definitely has the potential to power that, but underneath we've still got the same mathematics and the same techniques, and it's a bit of a journey into the unknown.
Michael Bird: So Craig, to summarise?
Craig Lodzinski: So to summarise: we've looked at the history of AI and the research dating back from the post-war era through to the modern day and where we are now. We've covered that, unfortunately, it's not as sexy as it's made out to be. It is coming into the mainstream - AI is becoming a pervasive and common topic in enterprise IT, and it needs to be treated with the same respect, and the same people-and-process decisions, as any other technology stack, from standard end-user and data centre stuff all the way through to really new and emerging technologies. And we've discussed some interesting possibilities. Obviously those will vary vastly between organisations - we touched on financial services and regulated industries, and on public sector and healthcare, where there are broad applications of AI. There will be applications in almost every industry, but they'll vary dramatically between industries and individual organisations, depending on their goals and the datasets and tools available to them. And hopefully we've made a bit of a start on building our very own HAL.
Michael Bird: Craig and Matt it's been really interesting talking to you both, thank you so much for your time. Listeners if there's anything in this show that has piqued your interest, or if you'd like to find out a bit more about what we've talked about in today's show, we’re going to be including lots of information in the show notes. We’ll also include some contact details if you'd like to speak to someone. Please do make sure you also click subscribe wherever you get your podcasts and we'll deliver this episode and future episodes directly to your phone or device as soon as it lands.