August 10, 2023

Interview with Hays - The Impact of AI on the Future of Work

On Thursday I was interviewed by Yvonne Smyth, who leads Hays’ HR & Legal practice in the UK & Ireland. The event, part of a series of Hays Insight webinars, was for several hundred senior leaders in HR teams across their network. It built on the 2023 report What Workers Want: Working with AI, focusing on Generative AI and the Future of Work.

We had five main areas of conversation:

  1. Introduction 

Why has AI been in the news recently? How does Generative AI differ from what came before? Why are all companies already using AI? Specific examples of use cases in HR teams.

  2. Jobs Of The Future

How to look at business value offerings & where they can be disrupted; Industries with unmet demand;  Jobs are bundles of skills and we are only automating some; Challenges for content producers and costly humans in the loop;  Augmenting current capabilities; Reducing labour shortages; Future gazing (because we couldn’t resist!) 

  3. Skills

Working with AI: Empathy, judgement, taste and coordination; Working in a workplace that is being changed by technology: Adaptability, willingness to learn; Digital Journalism as a model(?)

  4. Leaders And Those They Manage

New pathways to seniority; The productivity dilemma: what to do with newly freed up time of junior employees

  5. Training And Culture

Preparing for a world in which change happens faster; A cluster of General Purpose Technologies; Preparing a response (1) How big? (2) How urgent? (3) How are we set up to move forward? Keeping employees onside during change; What training L&D teams should be offering; What should you do in the near term? 

Given the focus on people, we didn’t go into other topics we have covered recently, such as security & workplace policies, value propositions, or the technical challenges of building AI tools, but we have published other articles about them here.

I’ve written up the full discussion as many of the topics will be of interest to other organisations grappling with the introduction of AI into their workforces. This is a detailed written summary of the event (rather than an exact transcript), so you can read along quickly rather than watching it in full. For a copy of the recording I’d recommend getting in touch with Hays; to discuss applying these topics to your own team and organisation, reach out to us at Paradigm Junction.

Introduction 

AI seems to have been all over the news in the past few months, but it is also something we have been hearing about for years. What has changed to attract so much attention recently?

There’s a recurring pattern: something is only called AI until computers can do it as well as humans, and then, after a brief flash of interest, we quickly accept it as just something software can do. If we think back 10-12 years, image recognition was a problem that hadn’t been solved: you couldn’t ask a computer whether a picture was a bird and get a good response. But in 2023 our smartphones group photos by the people in them, or you can search for pictures from your birthday and they will pick out the ones with cake. We have stopped being amazed by something that was impossible until recently, and that’s characteristic of the things we still call AI.

AI, at least the types that people interacted with regularly, had a shared characteristic of being incredibly good at specific tasks they had been trained on. Image recognition is the name for what I just described. AI systems also became really powerful at things like recommendations (think Amazon or Netflix), decision support or automated decision making (for everything from financial trading to advising judges on parole decisions), anomaly detection (for spotting credit card fraud, or faulty items on a production line) and forecasting & demand planning. One of the things that characterises these systems, which we think of as Deductive AI, is that they take in huge amounts of data about the state of the world and then output relatively smaller decisions: a yes or a no, 5 recommended movies, that sort of thing. 

The AI that has come to prominence since ChatGPT was released last year is altogether different. It's called Generative AI because, from relatively limited inputs such as a short phrase, it is able to create much more. This has been exciting researchers since the release of the 2017 paper Attention Is All You Need, which described the underlying architecture that would give rise to these properties.

In recent years two things have happened. Firstly, huge amounts of data and computing power have made these systems really good and really powerful for text generation. Secondly, lots of other types of output, music, images, video, computer code, have been shown to be susceptible to largely similar methods. All of a sudden we have software that can produce multimedia content that feels humanlike across a huge breadth of domains and specialisms. That’s fundamentally new. Given the huge array of processes where human work or intelligence or some combination of the two is the limiting factor, there is a lot of excitement that this technology will quickly change how we work. 

Our recent What Workers Want report showed lots of companies aren’t using AI yet. To what extent are organisations actually using AI? Is it showing up in an HR context? What about the latest generation of tools?

This is a common misconception we see with companies we have worked with, who think that AI isn’t something they are doing. As some of the examples we just discussed show, AI is already baked into lots of the ways we work. To take a really simple example, if you take credit card payments, then highly trained fraud-detection AI systems will monitor each transaction before it’s processed. Even the latest type of AI, Generative AI, has, we now know, been powering familiar features for the past few years, like Google search completion or the suggested greetings in email messages in Gmail and Outlook.

Companies are confusing having AI as part of their regular workflows with going out and building a specific piece of technology themselves, which isn’t something most companies will do. But everyone will have Generative AI pushed at them in the services they are already using. The real questions companies need to ask are: How will the services we consume from suppliers change? Should we do anything bespoke? How will our employees' ways of working change? 

In an HR-specific context, we have heard of larger companies using AI as a tool to assist in screening written or video applications from candidates. Those tools will certainly get a lot more capable as they embrace the latest generation of Large Language Models, the particular type of AI that lies behind systems like ChatGPT. Nevertheless, we would caution that candidates are clearly using these systems too. It will soon be trivial for someone to answer a few biographical questions and then have an AI assistant read every job posting that appears online and submit a personalised, tailored application. Recruitment teams could see the number of applications they receive going up 100x. So it’s really not just a question of what you can do with AI, and whether you should, but also what Generative AI allows others to do to you and your processes, and how you will respond.

I think the application example is probably familiar to lots of people in the audience already, so I will briefly touch on one totally different use case to illustrate where these tools are particularly useful. Anyone who has written a business policy from scratch, say a Code of Conduct, will know that they can end up being quite long, but that large stretches of them are pretty similar from company to company. Generative AI is perfectly suited to a workflow that looks something like (1) Download 100 Codes of Conduct from other companies and upload them to the AI system (2) It will pull out all the elements that are common to all, or almost all of these policies, and write this in clear and concise language that everyone understands (3) Then, and this is where we start seeing these systems augment human workers, they can throw up the 10 points of substantive difference, where other companies have taken differing decisions, allowing the HR professionals to choose between these different implementations. In short, focusing the human workers’ time on making judgement calls that they are expert in, and removing a lot of the boilerplate or pro forma work which would previously have landed on their desk too. 
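To make the shape of that workflow concrete, here is a minimal sketch in Python of how it might be strung together. The ask_llm() helper, the codes_of_conduct folder of plain-text policies, and the prompts are all hypothetical placeholders for whichever LLM service and documents a team has actually approved; this illustrates the pattern, not a production tool.

```python
from pathlib import Path


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for whichever approved LLM service you use.

    Replace the body with a call to your provider's API; it should return
    the model's text reply for the given prompt.
    """
    raise NotImplementedError("Wire this up to your approved LLM service.")


# Step 1: gather the example policies, already saved locally as plain text.
# (In practice 100 full documents won't fit in a single prompt, so you would
# summarise or chunk them to fit the model's context window.)
policies = [p.read_text() for p in Path("codes_of_conduct").glob("*.txt")]
combined = "\n\n---\n\n".join(policies)

# Step 2: ask the model for the common core, drafted in plain language.
common_core = ask_llm(
    "Here are several company Codes of Conduct:\n\n" + combined +
    "\n\nList the clauses that appear in all, or almost all, of these policies, "
    "rewritten in clear and concise language."
)

# Step 3: surface the substantive points of difference for a human expert to decide on.
differences = ask_llm(
    "Here are the same Codes of Conduct:\n\n" + combined +
    "\n\nIdentify up to 10 points where these policies take substantively "
    "different positions, and briefly summarise the options."
)

print(common_core)
print(differences)
```

The point of the sketch is the division of labour: the model handles the bulk reading and the boilerplate drafting, while the HR professionals spend their time on the list of genuine judgement calls it surfaces.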

Jobs of the Future 

Lots of people seem concerned about Generative AI taking their jobs leading to societal mass unemployment. How do you think about this concern? 

It's certainly easy to see where the fear comes from. Free, humanlike labour is going to present a competitive challenge to many of the tasks that employees in some sectors undertake, as other types of technology have before. There has been a powerful trend in recent years of innovation creating a small number of high-tech, high-paying jobs while reducing the number of generally skilled roles. I read an article the other day about a new car factory in Germany which only needs 2 staff on shift at any one time, and even they work remotely. With Generative AI in particular, I have a few thoughts that I would keep in mind.

  • Lots of the impacts will be felt at the level of business models. Some companies will inevitably try to save costs, which I’ll come back to in a moment, but much more fundamental to the way our economy and society look will be which business models survive. If your value offering has been about access to privileged information (as with many professional services), skill in operating some information process, or relatively undifferentiated communication, then you should absolutely expect AI-enabled businesses to come and compete. This is something we have written a lot about, and we recommend that every company we work with undertakes an analysis of how its particular value offering is affected by Generative AI.
  • Secondly, lots of industries have huge unmet demand that it is currently not economically viable to serve. Getting a lawyer. Health advice. Personalised education. I would expect that as the prices of these services come down, demand for them will skyrocket, and historically we have seen large increases in employment in associated industries when something like this has happened.
  • Finally, I’d highlight that so many jobs, if not all jobs, are really a bundle of tasks that we find it convenient to have done by one person. Coordination, company culture building, helping customers navigate a system even if that’s not their role on paper, horizon scanning, idea generation: these are all roles that many employees fulfil, or have the potential to, but they are rarely written into job specs and certainly not trained into AI systems. Take supermarkets, where so many of the checkouts have been automated, or Amazon Fresh, which went the whole way and removed cashiers entirely. What have they found? You need a huge number of security people. Staff to help direct customers. Staff to have a chat with a lonely customer who is really choosing you over your competitor because of the social interaction they get in your shop. Jobs are a bundle, so be careful when thinking about machines replacing humans: you will potentially lose other functions the human was providing beyond the one now done by the AI.

Which types of jobs are particularly at risk? 

So that all said, there are clearly some roles which have a higher share of potentially automatable tasks. McKinsey have written quite extensively on this and it always generates a nice headline when they find that 40% or 80% or whatever of jobs could be automated in the coming years, depending on how you define them. 

But there are a number of characteristics, certainly, that define jobs which are at risk. 

If you are producing content which someone else could just as easily write, and you don’t have specialised local knowledge that can’t be gained from a training manual or from examples of work on the internet, you are exposed. There will always be a market for bespoke writing or graphic design, but for, let’s say, the middle of the pack, it might quickly become possible to do that for almost free. For example, some actor friends have told me that voice-over work has almost totally dried up, and that used to be how several of them supported themselves between roles. Now, a computer can do it in whichever language or accent you are searching for.

If having a human in the loop is a relatively high cost for a business, then having that human replaced by a “humanlike” AI is going to prove compelling for lots of businesses. Customer service roles are one of the first places where we have been able to demonstrate the productivity impact of using Generative AI as a support tool for agents. One of the fascinating things from that paper was that almost all the improvement was explained by bringing weaker and newer employees up to the level of the very best, who it seemed the system was learning from. This points to a crucial role for customer service agents in training these systems; it’s not something that can be done without employees on side. This clearly varies by business. One of the reasons I love my favourite coffee shop is because I like the type of people who work there. But for businesses where I already interact only through an online chatbot, does it necessarily matter to me whether there is another person on the other end? If the resolution rate is as good or higher, then I think you would argue not. Nobody wants to end up with more automated phone lines though; everyone just wants to speak to a person, right? Which speaks to lots of things about the flexibility of systems and who has the power to go beyond a really narrow set of initial options, but that’s a topic for another day.

What new types of jobs could emerge? What opportunities does this open for those in the workforce or about to join it? 

Firstly, I’d say look at the industry you are in. You are seeing this with the strikes in Hollywood, where an industry feels under threat from automation. But industries where there is huge unmet demand could be good places to look. Law, education and healthcare are all places where people haven’t been able to get all the personalised support and advice they wanted because of cost, but where AI could potentially bridge that gap. And if that happens, whilst a lot of the work will be done by AI, you’ll need new employees to do accounting, product management, marketing and HR for the new companies that emerge, serving customers who previously weren’t being served at all.

Next, there are places where the augmentation from using AI tools, at least in the short term, makes it much more financially viable to hire an extra employee. One of my friends runs a consulting business that works a lot with the government and one of their limits on winning work is how much time they can dedicate to writing really long bids in response to government RFPs. They have found that, with AI support, each human bid writer can now submit something like 2-3 times more bids, enabling them to win more work. So long as their success rate stays the same they’ll actually be hiring more people into this team. 

More generally though, there are a whole bunch of potential obstacles that are removed by AI support. Coders, software developers, data scientists, call them whatever you like: people with the skill to write code have been in short supply across the economy for years. Well, happily, one of the things that Generative AI is best at is writing code. All of a sudden, people who might have refrained from starting a business because they couldn’t build a website or an app have that option available to them. Businesses that weren’t creating new tools or features should similarly find that the productivity of their software developers increases, which potentially unblocks things for other employees around the organisation. This is one of the reasons Sam Altman says ChatGPT in particular is a huge leveller for society: it can dramatically upskill workers with few or no technical skills.

And then you have a whole array of future jobs that are fun to speculate about but hard to pin down. There seem to be some jobs, often called prompt engineers, whose skill is in magic whispering to ChatGPT and other Large Language Models. Personally I think these jobs will disappear as the systems themselves become more usable and we start to interact with them in ways beyond a text box. There will always be a demand for originality to keep training these models, and who knows how that demand shows up. Paying people to collect data that has never been recorded online before. Paying people to create really wacky art or cartoons. Paying people to be good judges of fashion and taste.

As I said, fun to speculate about, but hard to pin down in terms of certainties as things evolve so fast at the moment. 

Skills

What new skills become important? What are the skills that everyone needs to have as a core proficiency (like using e-mail today)?

We think of these as divided into two main categories: 

  • Skills that are useful for working with AI tools in the workplace 
  • Skills that are useful for working in a workplace that is using AI tools

These are different types of skills, in the same way that email made it important for employees to be able to write clearly, where previously communication was verbal, but also made it important for them to be able to prioritise, because suddenly they might be getting 100 or more messages a day from group email chains.

For the first category, working with AI, the most important skills are going to be those that complement the things AI can’t or doesn’t do. AIs are very good at producing outputs like the information they were trained on, so this question basically becomes: what do humans have access to that AIs don’t? These are all the forms of knowledge that are never recorded in a systematic way. Social relationships between people. Someone’s facial expression and the likely indication of their mood. Subtle judgements about non-verbal things that get lost even when you try to put them into words. Humans are very good at these intuitive judgements, often drawing on tacit knowledge. However, we very rarely record them, so they are missing from the datasets that AIs were trained on. We might summarise these as:

  • Empathy - For roles that involve caring, but also Product Management and Sales. This includes responding to how someone in front of us is reacting in the moment in ways that might well not be verbal. 
  • Taste, Editing & Curation - For roles that involve content. Generative AI models have a tendency to produce middle-of-the-road content that, once you have read a lot of it, feels samey (in exactly the hard-to-describe way I mentioned above!). We need to move away from using Generative AI and asking “Is this good enough?” to, instead, using it as a first step and then asking “What input can I add to this to make it better?”
  • Coordination - For roles that involve working with multiple teams. Generative AI can adapt to the user, but not ensure that multiple people are on the same page and willing to take the actions required to make a change happen. We have had project management software for a number of years, but the problems of coordination are largely human. AI brings this into sharper focus. 

The above skills will allow employees to make up for the shortfalls of AI systems and, working alongside them, be more productive. There is a different set of skills which will be needed to navigate workplaces as they start to evolve more rapidly.

Five years ago the government announced a large investment in teaching people to code - it was seen as the necessary skill for the next generation of workers. The ability of Generative AI to write high-performing code throws this priority into doubt. Similarly, other digital skills, particularly media-related ones, seem to be well handled by AI. All employees whose job involves moving or manipulating information will find their jobs changing, however.

The only possible response is one in which expectations change. Stability is no longer a given for roles, and those who will thrive in this environment are the employees who are best prepared to adapt. Resilience, openness to new information and change, adaptability: whilst harder to teach, these are likely to be the key characteristics of successful employees in a world where technology-driven change is more dominant than it has been in our working lives to this point.

Are there roles, teams or organisations that seem particularly well placed in this area? 

Some industries and teams have already seen their work change dramatically with the rise of the internet and the digitisation of a number of industries, ranging from journalism to e-commerce. Teams in these areas are probably already adept at the second set of skills, having been forced to adapt many times in recent years. It is notable that many of those at the top of the field known as Digital Transformation, like our friends at Public Digital, and those advising at the interface of exponential change and policy/commercial responses, like Azeem at Exponential View, had early professional success taking traditional media organisations into the digital age. Similarly, I would point to consumer retailers, who have massively changed how they reach and serve consumers in recent years.

For any company looking to build or adapt these systems - which in the scheme of technology are relatively easy to work with - the traditional pool of developers and Machine Learning experts will be a valuable resource. The pivot from building traditional software to LLM-powered software is manageable at the moment, whilst the technology is still very novel and nobody has more than a few months’ head start.

Speaking of working with AI, what is your advice for avoiding harm that comes from the AI making mistakes? We know that there is bias in these systems and that they also produce incorrect information sometimes. 

The best approach will be to learn which specific tasks AI is useful for and where it shouldn’t be used. There is a big difference between asking an AI to summarise, say, a long document and giving it the CVs of three candidates and asking it to choose between them. Keep in mind what the skills of these systems are. They have exceptionally broad general knowledge, but little to no situational awareness or knowledge of how things are done in your company; they are keen and tireless, willing to read through hundreds of pages of documents, but they don’t have strong judgement, particularly on domain-specific questions. In many ways they are like an exceptionally bright and hardworking, but inexperienced, new employee. There are lots of tasks that you would give a new joiner, but they wouldn’t include making delicate judgement calls and probably wouldn’t involve publishing their work publicly without checking it first. This is a good heuristic to employ when using AI systems too.

Leaders And Those They Manage

To what extent is it mostly junior or senior roles that you see being disrupted? Lots of the future core skills you talk about (judgement, relationships) are typically developed as individuals progress to more senior roles. How can companies manage this? 

This is definitely a key challenge we see for employers too. The training pathway for many lawyers currently involves many years of being in the detail of cases and statutes, hunting for relevant information. Now, this seems exactly the sort of task that is likely to be automated away. It’s a similar story for copywriters, junior analysts and many more roles besides. The entry-level work, which typically involves production, is also the work most readily automated.

Over years of this work, individuals are able to pick up a feel for deeper principles and trade-offs, as well as phone books of contacts, which serve them well in senior roles. There really seems to be no substitute for experience other than… experience. Just like there is no way to fast-track learning to ride a bike.

For companies that seek to develop talent in house, this is a key problem that needs solving. Otherwise, leaders doing succession planning in 5-10 years’ time will find that their employees haven’t done time at the coal face in the same way and might not have the intuitive understanding needed to carry out the higher-level functions of more senior jobs.

What do leaders and managers need to be most mindful of when leading and managing teams where AI is likely to play a big part in how the team delivers what it is required to do?

I’ve just painted a negative picture in which AI deprives employees of valuable experience. But of course there is another option, where the time freed up by automation is repurposed for dedicated training of employees. In this world it’s easy to imagine how we could get more capable employees much earlier in their careers. It’s probably not necessary to do a routine task 100 times for an employee to pick out the important details; it is much more important to see the variety of ways in which a similar task can differ, so that all the possible variations are familiar in later years. If we could take these time savings and use them to give employees rigorous, dedicated training to prepare them for leadership functions, that could lead to real productivity gains.

Leaders will have a classic short vs long term dilemma though. 

I’ll give you a quick example from my previous life as a junior on a trading floor. As part of my training I was taught to price our financial products with a pencil and a calculator, because this was deemed to be the best way of teaching me how the mechanics of the trades worked. Once I started the regular day job, however, we had software that did all the calculations for us. This meant that whilst a second-year employee 20 years earlier might have done 5 trades on a busy day, because of all the manual calculation involved, I was able to do 30, each of which was a very routine action of plugging information into a computer programme rather than an intellectual exercise. The automation, rather than opening up space for developing higher-level skills, in fact made the job more routine and less stimulating. In the short term this clearly looked like a productivity boost for teams, but the quality of the work had decreased and the time it would take to reach seniority hadn’t changed; it was just a more tedious process to get there. I left, and lots of my peers did the same.

Leaders will need to be cognisant of these challenges as they introduce AI support tools. To what extent are they taking rewarding challenges out of the work environment, and what is replacing them?

Training and Culture 

It seems like this has all happened very suddenly. Do you agree? What were the early signals of this coming? Will change continue to be as fast?

Yes, definitely. There were signals. GPT-4, which is OpenAI’s most powerful model, was being tested in Summer 2022. In Autumn 2022, when ChatGPT was released, people who follow this closely were already blown away. I couldn’t understand why it didn’t break into the public consciousness until spring this year. It was one of the motivators behind starting Paradigm Junction: to help bring things that we were seeing in the world of science and technology to businesses in a way that worked for them.

I think all bets have to be that change will continue to be just as fast, if not faster. There may be fewer jumps where something enters the public consciousness so suddenly, but fundamentally there are a number of technologies that are combining with each other in very general ways and spurring on progress in interrelated fields. AI is driving progress in synthetic biology and medicine; it’s potentially helping drive innovation in energy too, with nuclear fusion seemingly getting closer and reports of a new superconductor just in recent weeks. If we get big breakthroughs in energy, that will feed back into computing, with more power available for bigger models. The term people are using is a new cluster of General Purpose Technologies, like we saw in the industrial revolution. It’s too early to say if that’s true, but I will note that now, for the first time in history, a small group of engineers can build something and have it used by millions of people within weeks or months. Product changes are shipped to sometimes billions of people globally. That scale for action and change has never been seen before, and it is why I would continue to bet on technology posing strategic challenges to business in the coming years - and of course that is what we hope to help with.

So how can organisations respond? 

I think the crucial first step is to ask yourself: how would we do things differently if we fully accepted that the pace of change in the world has increased? That will give you a guide, a north star, for the types of changes that make sense to bring in. When we work with organisations we do this in a few steps.

  1. How Big? Are the threats to our business model existential, because they remove the value that we provide to our customers? Is there a significant disruption to the way we do core business? Or is this just a change to our ways of working that will affect our sector broadly?
  2. How Urgent? What can I not wait to act on? Often this is simply putting in place some policies around the safeguarding of information and reassuring workers that there isn’t a backroom plot to build an AI system in secret and then replace them - in lots of industries stress levels are running very high. It also means addressing any rapid and serious threats uncovered in the value mapping exercise from the first question.
  3. How are we set up to move forward? This is continuing to evolve and we don’t yet know what products or services will be available even by the end of this year. That leaves us with key questions like: 
  • How do our decision making processes allow us to make quick decisions if required? 
  • Are we gathering information from people in our business who are spotting changes, and getting it to where it can be actioned? Amazon is a great example here: they surveyed every team in the business, hundreds of thousands of people, and condensed that down to a list of 67 new Generative AI products they wanted to offer, which then got leaked.
  • How are we doing cheap, bounded innovation rather than spending on a big system that takes years to develop?

But most of all, we encourage businesses we work with to ask: what are we learning? When the world is changing quickly this metric alone can be the difference between appropriate responses and inaction. 

Is there training that L&D teams should be looking at providing? What timeframe of change are we talking about and what should those responsible for L&D be doing now when they go back to their desks after this webinar?

Yes definitely. For this audience, with so much responsibility for both the skills and culture within organisations, I’d say there are answers to cover three horizons. 

In the immediate term, anything you can do to encourage your employees to use these tools safely will be a great source of learning about how they can help your business. Safety is clearly the first step, and that means having clear policies in place about what information can and can’t be shared. But where those conditions can be met - for example, Microsoft have been very quick off the mark at putting these tools within their enterprise business setup already - encourage teams to play and report back their findings. It’s such a new technology that there really aren’t best practices yet; instead there are millions of people finding ways to make painful little tasks easier. Where that learning can be captured, shared and incorporated into best practice, there are potentially big productivity gains for companies and big wins for making employees feel valued and impactful.

In the medium term, the coming months, People teams need to work with corporate IT to understand which of the new AI features are going to be pushed at their employees. Microsoft and Google, the big office software providers, are both testing whole suites of features; Microsoft has said Copilot will be widely available later this year. These tools will be making suggestions to people, in their working documents, about text and slides and emails and models. Particularly for companies with less technically confident workforces, helping them understand these changes will be crucial. So many workers are now expected to be fluent with email and office software that changes which are meant for the better could also be hugely disruptive.

Finally, I think it is about having a plan over the next 12 to 18 months for the more structural challenges that are raised. Are you helping your workforce adapt to changes? Are you making conscious decisions about what to do with employees who have more free time because AI has automated part of their job? Are you working with the workforce on the transition? If the customer service research showed anything, it’s that in-house AI systems pick up a lot of their training from the best practice of the best employees in settings like call centres. It’s going to take months, if not years, of training and then testing before these systems are implemented at scale, and it’s simply not going to be possible to do that well if the change is resisted by employees. Staff really are the knowledge holders for a lot of the information that these AI systems need to be useful, and persuading them to work alongside the development of these tools is going to be a big test.

What are the 3 top pieces of advice you would give to organisations as they begin adapting to the change around them?

Can I say they should all just send us an email? 

More seriously, I think I’d say: 

  1. Set policies on using the tools that are already out there and make sure that your employees understand them.
  2. Get information from front-line staff about how generative tools are helping them, and dedicate time and effort to scanning for new offerings you can buy off the shelf.
  3. Start designing training for staff on how to make the most of these tools, and engage them as change starts showing up. The reason I like the consultancy example is that it shows how powerful these tools can be at augmenting workers’ efforts rather than replacing them, and I think those kinds of conversations are important for building trust.

If I were allowed a 4th - and it’s not something we have touched on much today but, to my mind, it is the biggest mistake we currently see companies making - I would add: spend at least some time thinking about what others can do to you with Generative AI, not just what you can do internally. The world we all work and live in is going to change, and it’s going to take concerted work on intelligence gathering and strategy building to respond to that change, because it’s not something we can ignore.

— 

Organisations across industries are grappling with the challenge posed by rapidly evolving technology and the consequent changes to working practices. Paradigm Junction help companies and public bodies to:

  • Stay abreast of developments in this fast paced environment 
  • Apply them to the specific context of your industry and organisation 
  • Turn this into concrete actions you can take to mitigate risks or seize a competitive advantage

For more information on how Paradigm Junction can help you & your business email james@paradigmjunction.com
