Michael Bird: Hello and welcome to Explain IT brought to you by Softcat, the show for IT professionals, by IT professionals that aims to simplify the complex and often overcomplicated bits of Enterprise IT without compromising on detail. I'm host Michael Bird and over the next 40 or so minutes I'll be challenging our panel of experts to take a different area of the IT ecosystem and of course, explain it, and in this episode, the final episode of Explain IT season two, I've asked our panel to bring with them some common IT misconceptions that they come across speaking to customers, partners and suppliers. We’re going to be talking through these statements to understand why they're important, why organisations should be aware of them and some tips to help avoid falling into any traps. As always I'm joined by a panel of experts who will be bringing an interesting fact to help you to get to know them a little bit better. So first up we have Adam Louca who is Softcat’s chief technologist for security - Adam what is your interesting fact?
Adam Louca: So I'm currently learning to paraglide. I went paragliding in Peru, where you jump off a hill and fly into the air, which was pretty amazing, so I decided I'm going to come back and learn how to do that in the UK. My first lesson is actually Saturday, so if you don't hear from me after this, or in series three, it's probably because I didn't make it.
Michael Bird: We also have Craig Lodzinski, who is Softcat’s chief technologist for data and emerging technologies. Craig what is your interesting fact?
Craig Lodzinski: I was once blockaded in a hotel room by a giraffe.
Michael Bird: Ok, explain that one please?
Craig Lodzinski: So on the Softcat incentive trip quite a few years ago, to Kenya, one of the hotels we stayed in was actually part of a nature reserve, and all the animals are able to wander around and have free run of the reserve. One morning I went to go out to breakfast, opened the hotel room door, and about a foot away from me was the underside of a giraffe.
Michael Bird: So what did you do, did you just shut the door…?
Craig Lodzinski: So I stayed in the hotel room for fear of being kicked by giraffe.
Michael Bird: And then ordered room service?
Craig Lodzinski: Yeah pretty much.
Michael Bird: We also have Dylan Foster-Edwards, who is Softcat’s head of the office of the chief technologists. Dylan what is your interesting fact?
Dylan Foster-Edwards: Well I spent all weekend trying to think of something, so I went back to my military history; I thought I'd find something that might be interesting. Did you know that if any military personnel are out and about in the street and they spot a cortège passing, they must salute on behalf of the Queen? It's paying compliments on her behalf.
Michael Bird: That is interesting. We have Adam Harding who is Softcat’s chief technologist for end user computing. We also have James Seaman who is Softcat’s account chief technologist for public sector. James what is your interesting fact?
James Seaman: I once bypassed security at Heathrow, and neither Heathrow nor I know how that happened.
Michael Bird: Sorry how did you do that?
James Seaman: I don't know. I got off a flight and went through transfers and managed to get to the boarding gate of my next flight and hadn't passed through any security. I was very tired at the time, I don't know how I did it and no one in the airport knows how I did it either.
Michael Bird: How did you find out that you’d not gone through security?
James Seaman: Because I didn’t have the security tag on my passport, so I wasn't allowed to board the plane. So they had to drive me back to another terminal to go back through security. To this day we couldn't figure out... I wasn't on CCTV, they don't know how I got through Heathrow without passing through security.
Michael Bird: That is interesting.
As mentioned at the top of the show, you've all brought in some IT misconceptions that you've heard from customers, partners and suppliers, and I've put all of these statements into the official Explain IT hat of mystery. We're going to take them out one by one, I'm going to read them out, and we're going to talk through these statements to understand why they're misconceptions and perhaps how they apply to an organisation. So, the first one is, “Ethics is just a county to the north east of London.”
Craig Lodzinski: This is leaning on the idea of ethics in technology. We've seen a lot of it in what's happening with Facebook and the cases around antitrust, around privacy, around the use of people's data. As technology becomes more and more a part of everyone's everyday lives, I think people are starting to get a little bit more clued up on the ethical implications of certain technologies. In terms of how we utilise technologies like artificial intelligence and machine vision, when you look at things like the social credit system in China, or the use of surveillance and facial recognition trials in London with the Met police, there's much more of an onus on organisations that are developing and deploying these technologies to be aware, not only of the technology for technology's sake, as a lot of technical people might, and say, “well this is something that's really cool, this is something we've never been able to do before,” but to actually start to consider the impact these have on people's everyday lives.
Adam Louca: I guess the hard thing about that, fundamentally, is that regardless of the ethical impact of the technology development, once the information has been garnered you're never going to stop it. Whether or not you do ethical impact assessments before the research, which is becoming more commonplace in machine learning and AI research particularly, fundamentally you're not going to stop that information from coming out. And at least if it's available to the general public, whether commercially or via research institutions like universities, there's an opportunity for it to be used for both positive and negative ends. If you suppress the development of something because it might be used for negative impact, you potentially put it only in the hands of people who aren't going to follow the rules anyway.
Adam Harding: Who should be responsible for setting the guidelines and the boundaries? Because I think there's a good amount of middle ground that nobody would be upset with you using their information for, applying these technologies, especially when it comes to health and safety and things like that. But there will always be those that push the boundaries too far, so is it a case of we need to wait for legislation to come into place? Or is it a case of we need to be personally responsible as individuals and as organisations? Or is it actually a mixture of both?
Adam Louca: I think it's probably a mixture of both. If you look at something like the Geneva Convention, there are certain standards that the world in general adheres to, where we say, “these are certain red line issues that shouldn't be crossed, and if they are crossed by an organisation they will be dealt with by the world as a collective,” and I think we probably need to start taking some of that approach to some of these more technology-specific issues. If you look particularly at things like machines or AI, or some sort of software, making automated decisions around killing somebody, is that something that we're happy to have taken out of the hands of a human? Because then what do you do about attribution?
Craig Lodzinski: My concern around that is that the pace of change in technology is so fast. Legislation and conventions have to be governed by societal impact and society's own opinions, so if you take society as the baseline, accelerating at 1x, legislation and government impact is probably moving at half that speed, but technology is outstripping society's understanding of it by four or eight times. You look at how long Facebook has been around, or the deployment of things like voice assistants: we're only now starting to understand the actual ethical implications of those and, as you say, the Pandora's box has already been opened. It's already so widespread, so endemic in our lives, that I think, personally anyway, we have to look to the technology organisations and technologists to understand the ethical impacts of a technology, because otherwise, with that lag between anybody actually controlling this technology and it being unleashed on the public, Pandora's box stays open.
James Seaman: I think the underlying principle here, though, is that ethics is analogue; ethics requires human intervention. A computer, a robot, an AI can't be ethical. If you look at examples in healthcare, there are technologies that exist to scan medical images and find tumours and things that shouldn't be there, and electronic observations, AI that will pattern-match inputs like your temperature and your blood pressure and alert that your condition is getting worse. But the ethical input is always analogue; it's always a human. There's always someone who reviews that information and takes action on it, and that's the thing that I think needs to be foundational for the use of things like AI and automation: a computer cannot be ethical. AI, even true artificial intelligence, is predefined; it doesn't have an ethical reference. So from a technology perspective, in the use of technology, it needs human intervention.
Adam Louca: You could probably train it to mimic morality, but it would only mimic the inputs of what it has observed as being moral. So you might be able to train a computer, probably with some level of error, to mimic our moral decisions, the decisions humans would make from a certain culture, at a certain age, at a certain point in time. But to get a dataset large enough and general enough to actually reflect the whole world's population, across all socioeconomic categories and all ages, is such a big dataset that it's currently impossible, I'd guess, to train a model that follows it.
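Adam's point about mimicry can be made concrete with a deliberately toy sketch: a nearest-neighbour "model" that simply copies the judgement of the most similar decision it has previously seen. The scenarios, features and labels below are entirely invented for illustration; real systems are far more complex, but the limitation is the same, the model only echoes the values present in its training data.

```python
# Toy illustration: a model trained on observed decisions can only echo
# them back. Scenario features and labels here are invented.
import math

# (severity, people_affected, consent_given) -> observed human judgement
observed = [
    ((0.9, 5, 0), "intervene"),
    ((0.8, 3, 0), "intervene"),
    ((0.2, 1, 1), "do_nothing"),
    ((0.1, 2, 1), "do_nothing"),
]

def mimic(scenario):
    """1-nearest-neighbour: copy the judgement of the most similar
    previously observed scenario. No ethics involved, just distance."""
    _, label = min(observed, key=lambda pair: math.dist(pair[0], scenario))
    return label

print(mimic((0.85, 4, 0)))  # echoes "intervene", the nearest training example
```

A different culture's or era's training set would produce a different `mimic`, which is exactly the dataset problem described above.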
James Seaman: You can't mimic that though. Look at the Tesla examples around car accidents with autonomous driving - how does it decide what to crash into, who to hit, who not to hit?
Craig Lodzinski: Well, that's what we're unleashing with these technologies. Part of the underlying ethics of it, with self-driving cars, is the trolley problem in effect, and that's a philosophical problem that is still hotly debated and discussed, depending on what philosophical school you subscribe to.
Michael Bird: Can you just explain that quickly, the trolley problem?
Craig Lodzinski: So the trolley problem starts off like this: you have a trolley that's out of control and can't brake, and there are two sets of rails. You're stood by a track switch, and on one track there is one person tied to the rails, and on the other track there are five people tied to the rails. What's the ethical decision there, of which track you choose - who do you kill, effectively? Then you start to add additional problems on top of that: what if the one person tied to the track is a relative of yours and the five people are people you don't know? What if the one person is Hitler? There are lots of different ethical questions around that, and it starts to raise questions - you look at things like the Hippocratic oath and all these kinds of moral red lines that we have - where it's very subjective and contextual, and those are things that technology does very, very badly. If a self-driving car has a choice whether to kill the occupant or a pedestrian, for example, what's the ethical choice there? Now if you make self-driving cars, you'd probably say, “well, let's save our customer,” but is that really ethical?
Adam Louca: I wonder whether something like randomness will end up sorting this out, whether you'll have a random number generator for want of a better term, and the chaos of the universe is the dice roll? Just because I think that's the only way to remove morality from the fundamental way of resolving the issue, if it's no longer a moral decision, it's just luck.
Adam Harding: I think we live in an increasingly binary world, and this is not a binary answer. Everyone's ethics are not the same; there's an ethical spectrum. Even if you asked the five or six of us what the outcome should be in that given scenario, the trolley scenario, I guarantee you'd have at least one person who would disagree with the rest of the room. When you scale that out to different cultures, different demographics, just generally across the real world where there are lots of nuances, there's a general consensus here that the person who's in control is going to have to continue to make that ethical call, and even that may well not be the right one.
Adam Louca: I wonder whether you’d end up getting DLC in the future, where you’d be able to download different ethical packs to your car! When it first comes in the ethical pack is, kill you, save everyone else, “but for an extra $5,000 you can get the mode that will save your life! It's called selfish AI”.
James Seaman: But this is the point, it's the human element. You talk about different cultures and six of us not being able to agree; I think we're concentrating on things like AI and automation, but look at the use case for data, the things we've opted in to share our data for, knowingly or unknowingly. Look at the Cambridge Analytica scandal: that wasn't AI, that was something we'd opted in to, something we'd agreed to use. Facebook users had opted in, whether they knew about it or not, and a group of individuals and organisations decided to use that data in that fashion - that's the point. We can talk all day long about automation and AI and robotics, but when we talk about ethics in technology, the six of us could come up with an idea for a really clever application today and go and develop it, but it's our ethical input on how that data is used and how that app is delivered that matters. That's the point I'm trying to get across: it's that analogue human input.
Adam Harding: I think it's also massively about personal responsibility. Nearly all of us don't take our own privacy as seriously as we expect technology companies to take it. We sign up to every T and C and tick the boxes we have to click through to get the new update for Instagram or Facebook or whatever it is we're using. It's shared responsibility; it's not all on the technology companies. It's up to them to make sure the controls are in place so that if you want to opt out, you can, but I think it's up to us to actually be personally responsible for our own privacy, for our own information, and to only share it where we actually choose to.
Michael Bird: Ok so just to quickly wrap up this subject then, why is this important to organisations?
Craig Lodzinski: Because you look at what's happening in the industry and in society: the scandals around things like Facebook, where there's potentially a breakup of Facebook in an antitrust suit in the US which has been sparked by the ethical implications of the platform. As individuals become increasingly aware of the ethical implications of the use of data and of technology platforms, it becomes not only incumbent upon organisations to act in an ethical way, but it is going to have an impact on their top and bottom line. You look at privacy concerns around organisations like Huawei, or actions of shareholders in Microsoft and AWS over the customers that they deal with; increasingly shareholders, consumers and individuals are much more aware of the implications, and if organisations are building these services and using technology in that way, there is a tangible impact on their top and bottom line for sure. But also, as individuals and organisations, we need to think about what the impact of our organisations is on society, and whether we're doing things that are beneficial or actually harmful.
Michael Bird: Excellent and listeners if you want to hear a bit more about some of the stuff we talked about in that question, we recorded a couple of episodes, so there’s the AI and Machine Learning episode which is episode two of this season and Rise of the Machines which is episode seven, that covers some of that stuff in a little bit more detail as well. Ok so next statement, “Hardware is simple and software is the clever bit”.
Craig Lodzinski: I think increasingly we're seeing a big move towards doing things in software; there's the whole Marc Andreessen thing from back in 2011, “software is eating the world,” and if we look at the way organisations are going, at Softcat and the organisations we deal with, increasingly innovation is seen to be in what you do in code, what you develop, and the bright work is being done by developers. The platform is getting much, much simpler: you look at the IaaS services you can get in the cloud, you look at the new server and storage hardware that's coming out, and it seems there's not that much innovation going on in hardware - it's all about utilisation of software. I personally would beg to differ, and this may be because I come from a hardware engineering background, but I think we're going to see another shift moving forward. You look at organisations like Graphcore, down in Bristol, who are building custom chips for artificial intelligence; you look at the utilisation of ASICs - application-specific integrated circuits, custom-designed chips - being used by Microsoft; you have Google using their own custom TPUs, tensor processing units, which they make available in the cloud; and the work that Nvidia is doing in GPUs. More than ever we're seeing a resurgence in hardware being the thing that's not only the limiting factor, but also the enabling factor in real tangible technological progress.
Adam Louca: Is it maybe that the general computer is starting to become less relevant? The idea of general processing, or hardware that is not particularly good at anything but good at everything, is starting to fade away as specialist computing starts to rise again. If you go back 20, 25 years to when the first PCs came around, there was very much that specialist computing part: you took a general PC and you would slot something additional in to drive some additional functions that you needed. We're sort of going through that revolution, or maybe that cycle, again, as there are specialist tasks that are difficult for a general computer to achieve but are optimisable via specific types of hardware. That's what we're seeing in the AI and machine learning space: it's a particular type of computational problem, and hardware engineers are optimising for that specifically rather than trying to build a machine that's good at doing everything.
Adam Harding: I think it's unfair to say that hardware isn't a big differentiator, that it's just the dumb bit. What's actually happened is the hardware manufacturers, whether it be the general ones that we've been buying from for years or the cloud platforms that we're really consuming, have made it much easier to access really clever specialist stuff. Especially when you factor in the AWS and Azure and Google plays, the chipsets these people are designing to do extremely specialist jobs - 99% of organisations would never have even dreamt of being able to afford a specialist bit of kit, and the entry barrier to buying very specialised hardware has just really dropped. They've made it easier, and that masks the sophistication beneath the covers, so I think it's really unfair to suggest that the hardware doesn't matter. It's a marriage really; one without the other is useless. Not all of the magic is in the software.
Adam Louca: Maybe the question could be better thought about as: it's neither the hardware nor the software that really matters, because fundamentally everything is just a tool in a kit bag. What matters is having people who actually know how to utilise that technology and get the most out of it. There's no point in buying an AWS instance with a massive amount of specialist computing kit; by itself it does nothing, and even with a good software engineer who knows how to program for that platform, it still does nothing. It still needs somebody to say, “well, this is how we're going to take advantage of that technology development, this is how it's going to make us a better whatever type of business,” so I guess the magic is still in the humans.
Adam Harding: The software is just the bit that connects it, the software is the bit that connects your processes, your people, your creative side to the back end that drives it. It’s just that we have so much more performance now than we ever had before, at such a low price point, with such ease of consumption, I think you’re right, it’s actually the creative thinkers behind all this stuff, it’s the people behind this stuff that make the real difference. But the software is that glue that connects you.
Michael Bird: Ok, so next statement, “Can/will I fix all of my problems by implementing Windows 10?”
Craig Lodzinski: No, next question.
Adam Louca: Well you’ll get rid of one problem. You won’t have Windows 7.
Adam Harding: And that wasn’t necessarily a problem!
Adam Louca: I have an opinion on this and I'm not sure it's going to be popular.
Craig Lodzinski: Is that just going to be your custom sting for everything?! “Adam Louca I have an opinion on this!”
Adam Louca: What do we think, how do we feel, about planned obsolescence within the technology industry? Fundamentally, you can argue that these platforms that have been around forever aren't, by themselves, broken; they are designed to break. They are given lifecycles, and those lifecycles, yes, are driven by technology improvements, but fundamentally they are also driven by commerciality and profitability: if we made the one perfect operating system and sold it once, those companies wouldn't be very happy, because they wouldn't continue to make their revenue streams. I do slightly struggle with “Will Windows 10 fix everything?” because it won't; at some point Windows 10 will be broken too, and whether or not you believe the evergreen model means it will never need replacing, fundamentally we will elevate and evolve that operating system into something else that is more suitable. What Windows 10 will fundamentally solve is the fact that you won't get updates for Windows 7; that's the main thing everyone's chasing away from, the fact that you won't get patches for vulnerabilities that come out. So you are being pushed onto a modern operating system, and you're being pushed to move forward. Now arguably there are benefits to doing that - there are lots of performance benefits, lots of security benefits, in moving to Windows 10. But back to the previous question: is that innovation? Is what's going to truly make your business better the fact that you've now got a Windows 10 operating system rather than a Windows 7 operating system? In my opinion, probably not.
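The lifecycle point is easy to quantify: once an operating system passes its published end-of-support date, every day adds to the window in which new vulnerabilities go unpatched. A minimal sketch, using Microsoft's published end-of-support dates (Windows 7 extended support ended 14 January 2020; Windows 10's published retirement date is 14 October 2025):

```python
# How far past (or before) end of support an OS is on a given day.
from datetime import date

# Microsoft's published end-of-support dates.
EOL = {
    "Windows 7": date(2020, 1, 14),    # extended support ended
    "Windows 10": date(2025, 10, 14),  # published retirement date
}

def days_unpatched(os_name, today):
    """Days an OS has gone without security patches as of `today`
    (negative while it is still in support)."""
    return (today - EOL[os_name]).days

print(days_unpatched("Windows 7", date(2020, 7, 14)))  # 182
```

The same arithmetic applies to Windows 10, which is exactly the "it won't fix everything" argument: the clock is simply reset, not removed.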
Adam Harding: What we've got to be careful of here is that if we are aiming for perfection in any of the layers of technology we're trying to deploy, whether it be hardware or software, we're going to get it wrong; it's doomed to fail because everything else is a moving target. With regards to your planned obsolescence piece, I think that's a little bit loaded, but what it does do is enable organisations like Microsoft, Apple or Google - and I appreciate a lot of us think they don't necessarily need the money - to fund constant innovation on our behalf, because as we spoke about in the other section, a lot of our customers' organisations don't have time to properly innovate, so they need other people to do it on their behalf, and a lot of that's what this is about. Going back to the original question, will Windows 10 fix all your problems? Well, obviously it depends on what your problem is, but Windows 10 is just the springboard. Normally where I've seen Windows 10 actually being a problem recently is in the wave of digital transformation projects that have come out - digital workspace transformations, things like that. They've all started off with the right intentions: lots of good key aspirations about getting better employee engagement, attracting the best new talent, and making sure everything is as secure, as fast and as performant as it should be, and that you're as free and flexible as you should be, to be brilliant at the job you've been brought in for. That's how the opening gambit always starts, and then what quickly happens is the budget for these wonderful lofty ideas gets claimed and land-grabbed by Windows 10 upgrades and Microsoft 365 moves.
In itself that's not a problem - those are very useful fundamental building blocks - but if realistically these digital workspace transformation projects have ended up being Microsoft upgrades with some new devices, if you can afford them, then that hasn't delivered on creating a beautiful user experience that makes it easier for your colleagues to be brilliant at whatever it is they're employed to do. It's not just about technology; it hasn't given us the opportunity to look at new processes and new ways of streamlining line-of-business challenges. Realistically, Windows 10 is not a necessary evil - it is a far more secure platform, far more resilient, far more compatible with the applications we've been throwing at it - but it has ended up being a distraction from the problem we were originally trying to solve, which is how we make it easier for people to be brilliant at the jobs they've got to do without technology getting in the way. Everybody talks a good game about employee choice, and what's really happened is we've ended up in a position where you have two choices: one Windows machine or another Windows machine. So we have choice with a little c, I suppose, when what all these things originally kicked off with was: can we give true employee choice, can we make sure that people have a choice between a Windows device, an Apple device and a Google Chrome device?
Dylan Foster-Edwards: I guess the thing we talk to customers about, though, is that it's not about what software you've got running, or what device you've got; it's about understanding what you need to do to make a user's experience better. It's understanding what's working and what's not working today that you need to change to make it work better for them in the future. So we talk about understanding how they use technology in their daily life, what is limiting their ability to do their job as effectively as possible. Customers come to us and say, “I want to do a Windows 10 upgrade,” and we ask, well, why? Then we start to delve a lot deeper into understanding what they really want to achieve, rather than just asking, “well, how many devices do you want to deploy? Which one do you want? And how quickly do you want them deployed?” We spend a lot more time understanding the process, as you said, understanding how their organisation is segmented, what different types of users they've got, how they use their technology, and what device would suit someone out on the road, compared to somebody in the office, compared to somebody stuck up a mast of some kind. We have to make sure we're developing the solution to meet the need of the specific user.
Michael Bird: Ok so, “Edge computing and 5G are overhyped.”
Adam Harding: I think it's just timing; I think it will not be overhyped in the end.
Michael Bird: So just quickly define edge computing.
Craig Lodzinski: So the idea of edge computing is this: at the moment we have very centralised systems in large data centres, and as systems and networks become more distributed, we'll start to move computing to the edge. This is using network terminology, effectively: your core is your traditional data centre environment, and the edge is much closer to where users individually access it. The idea behind edge computing is placing compute resource adjacent to the network edge, adjacent to where data is created and where users access the endpoint, and moving that away from the data centre.
James Seaman: If you take Adam’s point earlier on around Core ML being on the iPhone, that’s edge computing, that's the smart edge, so that's having the capacity, the hardware and the software capability to be able to transact locally.
Adam Harding: I don't believe it's overhyped, I just think it's a bit premature at the moment to be deciding its fate. I feel that as we move forward, greater and greater use cases are going to be found for making decisions at the edge. I don't want to have to call back to the cloud to tell my car to hit its brakes.
Dylan Foster-Edwards: I guess on the overhyped thing, I heard at the weekend - Glastonbury was on this weekend - that it was allegedly the first event that had full 5G coverage. One, there was most probably only one device there that could actually use it, so that's a little bit of hype to suggest that ‘everyone had 5G access’, which clearly they didn't, and I'd be interested to understand why that is. But with rolling out 5G, we know how hard it was to get to 3G and 4G, and we know how much more density is needed with 5G - what are they going to be called, Lodz?
Craig Lodzinski: Base stations.
Dylan Foster-Edwards: Can we realistically believe that that is actually going to get deployed in our lifetime, considering how bad we are at deploying technology?
Adam Harding: Yes it’ll get deployed, it just might be late. And I think that's probably going to be the whole 5G story. We will get coverage eventually, I wouldn't worry too much about planning for it within the next six months or anything, but I think actually the requirement for edge computing and taking processes and decision-making and AI and machine learning to the device that's in your hand, in your office, near to where the decisions are being made, that will drive the requirement for 5G and a huge expanse of other networking.
Craig Lodzinski: Absolutely. Two things on this: in terms of 5G deployment, there will be some aggressive deployments in certain areas, test sites and potentially very built-up areas, but I think millimetre-wave is going to take much longer to be deployed outside of very, very specialist use cases.
Michael Bird: Just a quick one - millimetre wave?
Craig Lodzinski: So millimetre-wave is the part of the 5G spectrum that's outside the traditional GSM spectrum. It uses a wavelength designed for very high bandwidth and very low latency, but because of that it operates over a very short range - very useful for point-to-point communications, things you might traditionally use microwave for, but not so much for a standard cellular radio access network. I think we went into this a little bit more in the 5G episode of Explain IT. Millimetre wave will be deployed a little bit slower, but in terms of general consumer access and the base stations that are being deployed, it inevitably will be deployed in our lifetime, because current technology will become obsolete: mobile operators have a natural refresh cycle, and if you're going to refresh and you've already paid for 5G spectrum, there's no reason why you wouldn't use it. In terms of edge computing, to my second point, the thing I always go back to on this is the three laws. You have to look at the law of physics: the speed of light in a vacuum is a constant, so stuff that is further away takes longer to get to. To mirror Mr Harding's point, you don't want your car having to call back to a data centre - if you're here outside our office in Marlow and your data centre has moved its workload to Amsterdam, that latency, when you're barrelling towards little Jonny at 30 miles an hour, is not going to be particularly useful.
There's the law of economics around the cost of these things, the cost of moving data up to the cloud and moving things over the network, so large workloads will increasingly be processed on the edge because of the cost of moving them. And then there's also the law of the land, in that you may have certain policies about what you want to process on the edge and what you want to move over the network, and also, increasingly, legislation on how we move data between countries, territories, availability zones and so on.
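Craig's speed-of-light point is easy to sanity-check with rough arithmetic. The sketch below (in Python, with all distances and speeds as illustrative assumptions, not measurements) estimates the best possible round trip from Marlow to an Amsterdam data centre and how far a car at 30 mph travels in that time:

```python
# Back-of-the-envelope latency sketch: why distance matters at the edge.
# All figures are rough assumptions for illustration, not measured values.

SPEED_OF_LIGHT_KM_S = 299_792   # in a vacuum
FIBRE_FACTOR = 2 / 3            # light in fibre travels at roughly 2/3 c

def min_round_trip_ms(distance_km: float) -> float:
    """Physical lower bound on round-trip time over fibre (no routing or queuing)."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR)
    return 2 * one_way_s * 1000

# Roughly 400 km from Marlow to an Amsterdam data centre (assumed figure)
rtt_ms = min_round_trip_ms(400)

# A car at 30 mph covers about 13.4 metres per second
distance_travelled_m = 13.4 * (rtt_ms / 1000)

print(f"Best-case RTT: {rtt_ms:.1f} ms")
print(f"Car travels:   {distance_travelled_m * 100:.0f} cm in that time")
```

Even before any real-world routing, queuing or processing delay, the physics alone costs around 4 ms per round trip, which is the gap edge computing is meant to close.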
Adam Louca: I think we're starting to see the emergence of this idea of smart processing, this idea that policy, whether that's business, security or performance policy, over what you're trying to compute will start to dictate automatically where the optimal place to run it is. I look at SD-WAN as one of the earlier examples of this type of technology, this idea of multipathing, of best-path selection using lots of different mediums, so having a 5G, 4G or 3G cellular connection, having an MPLS link and then having broadband; that's the edge problem but in a network construct. The broadband is high bandwidth and cheap, but it's not guaranteed and it could be latent; the MPLS is very expensive and potentially slow, but highly stable; and 4G might be rapid to provision and relatively cheap, but it isn't as guaranteeable and it isn't as fast as some of those other link types. So the trick is not about whether or not it's overhyped, it's whether or not it's relevant to you, and the relevancy of the technology is what matters. If you fit into one of those three use cases, that triad Craig just mentioned about the law of the land, economics and physics, then you will ideally optimise for the best workload placement. Not everything needs to go to the cloud, not everything needs to be done on the edge; it isn't a binary decision, these are just tools.
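Adam's SD-WAN example, where policy dictates the optimal path, can be sketched as a toy link scorer. The link characteristics, policy weights and scoring function below are all invented for illustration; real SD-WAN products use their own proprietary metrics and probing:

```python
# Toy sketch of policy-driven path selection, loosely mirroring how SD-WAN
# weighs link characteristics. All names and figures are invented.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float
    bandwidth_mbps: float
    cost_per_gb: float
    reliability: float  # 0..1, fraction of time within SLA

LINKS = [
    Link("broadband", latency_ms=25, bandwidth_mbps=300, cost_per_gb=0.01, reliability=0.97),
    Link("mpls",      latency_ms=15, bandwidth_mbps=50,  cost_per_gb=0.50, reliability=0.999),
    Link("4g",        latency_ms=45, bandwidth_mbps=40,  cost_per_gb=0.10, reliability=0.95),
]

def best_link(links, policy):
    """Pick the link with the highest policy-weighted score.

    policy maps a criterion to a weight; a higher weight means the
    workload cares more about that criterion.
    """
    def score(l: Link) -> float:
        return (policy.get("latency", 0) * (1 / l.latency_ms)
                + policy.get("bandwidth", 0) * l.bandwidth_mbps / 100
                + policy.get("cost", 0) * (1 / l.cost_per_gb) / 100
                + policy.get("reliability", 0) * l.reliability)
    return max(links, key=score)

# A voice call cares about latency and reliability; a bulk backup cares about cost.
print(best_link(LINKS, {"latency": 10, "reliability": 5}).name)   # mpls
print(best_link(LINKS, {"cost": 10, "bandwidth": 1}).name)        # broadband
```

The latency-and-reliability policy lands on the MPLS link, while the cost-driven policy picks broadband, mirroring the trade-offs Adam describes.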
Michael Bird: Ok so just to wrap up then, Edge computing and 5G are overhyped. Why is that important to organisations?
Craig Lodzinski: Like any technology that hasn't been fully realised yet, there is a play from vendors and technology companies to demonstrate the promise of it, to drum up interest, to drum up financing if you're a startup or you're looking for investment. So absolutely there is more hype right now than there is realisation, because it hasn't reached primetime, but that's the case with any technology. I think the promise of 5G and edge computing, the new use cases for them and the evolution of what we do with data and computing absolutely has a lot of potential, but there is always going to be that time gap between the marketing phase, where we're prophesying, and the actual execution phase. So, while not wishing to sit on the fence: it might be overhyped, but we kind of don't know yet because we haven't tried it out.
Michael Bird: “Simply spending more on security will make me more secure.”
Adam Louca: It depends, I guess, is probably the answer, but I think the kernel of this is no. Investing in security tooling and buying new security software can make you more secure, but I don't think it's a given. The majority of companies are spending more than they ever have on security, but if you, our listeners, ask yourselves honestly whether you really feel more secure than you did two or three years ago, I think the majority of customers, at least the ones I speak to, would answer that they probably feel the same, if not less secure. The risk has evolved, fundamentally, yet the number of pounds that people are spending on security increases every year. For me, we need to get away from the assumption that buying things will just make the problem go away; we need to learn how to actually quantify the problem and bring it to the surface so that we can understand what our risk is. I think a lot of customers today are, number one, not understanding their risk profile, and number two, not understanding how they should minimise that risk, and therefore they're stuck in a state of security anxiety. I almost think about it like a panic attack for customers: you're worrying about something that is unquantifiable, or that you haven't quantified, so you're sitting there in a state of unknowing. As soon as you put bounds on something, as soon as you can understand the extent of an issue, then you can make a plan and a programme to remediate it and become comfortable with your progress. Without that you will just throw money at it and hope that things get better.
Dylan Foster-Edwards: Speaking to a lot of customers recently, they seem to just buy tons and tons of software, and for me it's about whether they really understand the problem they're trying to fix. There's a lot of hype, they get a lot of vendors banging on the door saying you've got to buy this technology because it's the best out there, and I suspect there's tons of overlap and they hardly deploy any of the technology. What do you see as the key things to do to genuinely understand what the problem is, before going off willy-nilly and buying tons of tools because people say it's the right thing to do?
Adam Louca: I always have that saying that security isn't hard, it's just hard work. And I think that really boils down to the fact that people don't want to do, for want of a better term, the un-fun stuff. And the un-fun stuff is the foundational work: what are your assets, what are your data assets, how important are they to you, what are your risks, threats and vulnerabilities, and ultimately what level of risk will you tolerate, and therefore what security mitigations or controls do you need to put in place? And the problem is there isn't a piece of technology for that; you can't just buy a thing and it will do it. You need to put in time and effort and have the right expertise to make those decisions, and I think organisations either willingly don't invest in the right people and resources to do that, or alternatively don't know that they need that type of skill set within their organisation. It's a little bit like trying to run a company without a CFO: most organisations will have someone who looks at corporate risk, or at their risk profile from a financial perspective. Not doing this in security is a bit like saying, “Oh we don't really track our money because I'm sure it'll be ok.”
Dylan Foster-Edwards: Do you think that means you should actually invest in having qualified people rather than lots of tools?
Adam Louca: Yes for sure, 100%. The people are the ones who are able to provide the context and the context is everything in security, knowing who you are, knowing what you do and knowing who's likely to target you is what enables you to make those decisions and I think whether or not that's a resource you have internally or that's a resource you contract in or that's a resource that you go out to market for, is less relevant but I do think it is important to have that because without that I think you are unable to make a sensible decision and you're unable to put bounds around this problem.
Adam Harding: And I think also people can get very distracted by buying technology and by skilling up the people that turn settings on and off. Part of the security challenge is actually getting your broader user base ready. A lot of it is understanding the processes your organisation uses and how security can protect and serve them better, and I think there is a lot of good money thrown after bad on security tools because paranoia, most of it well-founded to be fair, rings so loudly. So there's something to be said for making sure your user community understand the role that they play in it as well, and how they should behave in certain situations, and feeding and watering them with training and awareness pieces, constantly, because the threats change. I think that's quite important; I don't think it's good enough to have someone sign an IT policy when they join the company and then, 15 years down the line, expect that they have matured as all the threats have.
Adam Louca: I think it has to come back to define what are you trying to secure, against who and to what level. Without answering those three basic questions I think you are just throwing money into a black hole and hoping that it's all going to be ok. I think you have to strip the problem back and again I think like a lot of the questions we’ve answered today, it is those business questions that are not technology-led that often aren't the ones being asked because they're not of interest to business leaders and I think that they have to become of interest to businesses that want to exist in, I guess, this digital native era.
Michael Bird: There are a few episodes that we’ve done this season that talk about security in a little bit more detail: episode 4, which was our Security Trends episode, episode 6, which was our Supply Chain Attacks episode, and episode 12, which was our EDR episode, so you can check those out. Ok, so our final question, or final statement, I should say: “The cloud is always better and cheaper.”
Craig Lodzinski: So this is one where again we're doing a lot of fence-sitting, and it depends, but it really does depend. What we do a lot of here at Softcat is helping customers to understand the total cost of cloud: not just the difference between buying an IaaS service versus a physical piece of server, storage or network hardware, but the broader cost. There are lots of different tiers and lots of different services inside public cloud, and when you look at, say, a platform as a service or a software as a service offering, what you're also getting is things like configuration and security, all these different capabilities that are managed for you and offloaded. Compare that to the cost internally of additional software, additional hardware, training individuals and the cost of time: you really have to weigh up the whole total cost of ownership and understand all the implications, rather than just clock-cycle-per-clock-cycle, second-per-second costings.
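The total-cost-of-ownership comparison Craig describes can be sketched as simple arithmetic. Every figure below is an invented placeholder; the point is the shape of the comparison, that staff, training and facilities costs belong in it alongside the sticker price, not the numbers themselves:

```python
# Minimal TCO sketch contrasting on-premises and IaaS over three years.
# All figures are invented placeholders for illustration only.

YEARS = 3

on_prem = {
    "hardware_capex": 60_000,       # servers, storage, network
    "datacentre_per_year": 12_000,  # power, cooling, space
    "ops_staff_per_year": 40_000,   # patching, backups, monitoring
}

cloud = {
    "iaas_per_year": 30_000,        # instance and storage fees
    "training_one_off": 10_000,     # upskilling the team
    "ops_staff_per_year": 20_000,   # reduced, not eliminated
}

on_prem_tco = (on_prem["hardware_capex"]
               + YEARS * (on_prem["datacentre_per_year"]
                          + on_prem["ops_staff_per_year"]))
cloud_tco = (cloud["training_one_off"]
             + YEARS * (cloud["iaas_per_year"] + cloud["ops_staff_per_year"]))

print(f"On-prem 3-year TCO: £{on_prem_tco:,}")
print(f"Cloud 3-year TCO:   £{cloud_tco:,}")
```

With these made-up inputs the raw IaaS fee looks more expensive than the hardware, yet the cloud option comes out cheaper overall once operations and facilities are included; flip the staffing assumptions and the answer flips too, which is exactly why "it depends".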
Adam Louca: From a security perspective I would have to argue that it probably is cheaper, and that's mainly because of the shared responsibility model. Ultimately you are outsourcing a percentage of that underlying security technology that you typically would have had to invest in on premises. That being said, I find a lot of customers don't know how to utilise and actually get the most out of the underlying resources and functions that exist on the platform. So what you're doing is fundamentally trading off upfront investment in technology, new hardware or new data centre equipment, but you are stretching that cost out into things like training and enablement, because without people who are able to turn this functionality on you are just doing, as Craig said, a like-for-like comparison of how quick and cheap the fundamental processing is. It's that day-to-day operations cost that is really the true cost of cloud. That being said, I think there are very few organisations who could build a data centre that has the physical security capabilities of any one of these tier 1 data centres in the world. I don't expect there is anyone out there who can run the security operations centres that these providers are running to monitor and manage those platforms, and I don't believe there's anyone in the world who invests as much in innovating, developing and extending those platforms at the rate that they do. So I do think you kind of have to get off the fence and get on the journey with cloud, but it comes back to understanding that lift and shift, trying to move your existing security control models and fundamentally your resources in the same format, is not the way to consume cloud. If you aren't prepared to make that decision, then my personal view is don't consume cloud as anything other than SaaS.
If you are not an organisation that is prepared for, and capable of, transforming your applications into a modern cloud architecture, then work out what functions you really need, try to take those all as a SaaS service, and minimise your operational cost by fundamentally crunching IT down; it becomes about the functions that you deliver. But if you do want that flexibility and control, and ultimately to define your own destiny, then you've got to go through a transformation of your internal skillset so you're able to get the best out of the cloud.
James Seaman: The shorthand version is that the classic IT model does not work with public cloud. You have to transform multiple elements of your IT service, not just your technical platform, if you want to consume public cloud in an efficient manner, and then the only way you can really understand whether it is cheaper is to understand that TCO. But the classic IT model does not work, it does not stack up; you have to transform your entire IT organisation to utilise public cloud effectively.
Craig Lodzinski: I think there's a lot of cost transfer as well; you can't take the traditional cost allocation model that you can on premises. Look at a capital acquisition of hardware: once you've put one service on it the rest is effectively free, and the marginal cost diminishes with utilisation, because the denser you get on that platform the better. In a cloud environment everything is billed incrementally and you have, effectively, a flat rate, although there are certain benefits based on your usage, spot instances, reserved instances and so on, so you have a more direct cost allocation model. But also, as Adam's pointed out, the investment's being made in the security operations centre, in configurability and in the tools you have. To use public cloud efficiently you have to look at which workloads are best suited to it, and understand that in order to do that effectively and take advantage of all those benefits you also have to look at re-architecting your applications. If you want something that's very versatile and really ideally suited to a cloud environment, it has to be a cloud-native architecture. There is a use case for lifting and shifting VMs, and we look at services like VMware Cloud on AWS, which is really well tuned for traditional VMware workloads, but if you're putting webscale applications out there you need to look at containerisation, at serverless functions, and at how you utilise your storage and network differently, in order to reduce that cost base so that all the other features you're getting on top become more cost-effective and the overall TCO becomes realistic when compared to on premises.
Adam Harding: So yes, I think this is all correct, but realistically it's about balance. Not all workloads are appropriate for the cloud; it might be for a regulatory reason, a cost reason or a legacy reason, but when you look at the processes that technology is supposed to be supporting, there are plenty of occasions where edge computing and machine learning at a device level, in the intelligent edge, is far more suitable. However, for the rest of the workloads that support that process, hooking into one of the hyperscale clouds so you can take advantage of the security, reliability, performance, scale and all that other good stuff is perfectly right. I think it would be a mistake for anybody to be purely cloud or purely on premises.
Michael Bird: Ok so the cloud is always better and cheaper. Why is that important to organisations?
Craig Lodzinski: I think it's important to understand, as we've discussed, the operating model you're running, the way you structure your IT team, the way you select your services and the way you understand the cost of those services, because, as James pointed out, the way you run IT in public cloud is very different, and everything that falls out underneath that is very different as well. So looking at the pure sticker price, the label on the tin, is not good enough to understand the complete cost; you also need to understand the benefits. I liken public cloud a lot to a hotel, in that you get a lot more bells and whistles but you may not want to spend 365 days a year there, because it becomes expensive for the really mundane things. But also, perhaps preaching to the crowd here in the office of the CTO, hotels are really useful for business, and if you're going out there and earning money, if it's something you're monetising, if there's a tangible benefit from which you can create revenue or derive profit, that changes the cost model again. That innovation catalyst, that democratisation of services like artificial intelligence and machine learning, all the new capabilities available in public cloud that can be innovation and revenue drivers, further complicates the cost base. So it's really about taking a holistic view of all the different IT services, public cloud, edge computing, AI, 5G, I'm trying to cover all the buzzwords we've covered so far this episode, and understanding their impact on your organisation, because that's how you understand the financial implications of any IT transformation.
Michael Bird: So Adam L, Adam H, Craig, Dylan and James, it’s been really interesting talking to you all, thank you so much for your time. Listeners, if there's anything in this show that has piqued your interest, or if you’d like to talk to someone at Softcat about anything we’ve talked about this episode, do make sure you check out the show notes; I'm going to put some of the stuff we talked about today in there, as well as some contact details if you'd like to get in touch. Now putting together the Explain IT podcast involves lots of people, so before we wrap up the final episode of this season I just wanted to say a quick thank you to a few people. Firstly, to the 18 incredible, interesting and insightful guests that we’ve had this series: Dylan Foster-Edwards, Dean Gardner, Adam Harding, Craig Lodzinski, Matt Armstrong-Barnes, James Seaman, Philippa Winter, Brett Walmsley, Matt Helling, Dan Wiley, Adam Louca, Rob Hillier, Joe Baguley, Andy Hardley, Martin Myers, Jaspreet Singh, Rebecca Monk and Russell Humphries. Thanks also for production support from Laura McLoughlin, Daisy Mossop and Robert Murgatroyd, transcription, copy editing and digital support from LJ Stocks, editing from Harry Morton and the team at Lowerstreet, and of course a huge thank you to you for tuning in and listening to this podcast. So that's it for Explain IT season two, we’ll hopefully see you next year for Explain IT season three, but for now, I'm Michael Bird and thank you for listening to Explain IT from Softcat.
Episode 1: 2019 Tech Predictions
Episode 2: AI and Machine Learning
Episode 3: The Future of IT in Healthcare
Episode 4: Security Trends
Episode 5: 5G
Episode 6: Supply Chain Attacks
Episode 7: Rise of the machines
Episode 8: Unstructured Data
Episode 9: Quantum Computing
Episode 10: Multi-cloud
Episode 11: The Future of Work and Workplace
Episode 12: Endpoint Detection and Response
Episode 13: IT Misconceptions