Gen AI Archives - TechInformed

Why the AI productivity revolution should enhance, not replace the workforce (2 September 2024)
It’s been well documented that, since the 2008 financial crisis, productivity in the UK has stagnated, failing to regain the upward momentum that once fuelled economic prosperity.

Despite advances in technology, the anticipated growth in workplace efficiency has not materialised. However, the tide may be turning with the emergence of artificial intelligence. According to a widely published report by Workday, AI has the potential to unlock an astounding £119 billion in annual productivity across UK enterprises.

But as promising as AI is, it’s crucial to recognise that it is not a silver bullet. AI can significantly enhance productivity, but business leaders must approach its adoption with a comprehensive, responsible strategy.

Empowerment, not replacement

There is a common misconception that AI will lead to widespread job losses, replacing human workers with machines. In reality, AI should be viewed as a productivity enhancer rather than a job eliminator.

In the same way Microsoft tools have become indispensable in modern workplaces, AI can take over mundane, repetitive tasks, freeing up employees to focus on more meaningful, impactful work. This shift allows workers to engage in activities that require creativity, problem-solving, and human interaction — areas where AI cannot compete.

The UK’s productivity gap — 24% lower than it would have been if pre-2008 trends had continued — highlights the need for innovative solutions. AI presents a unique opportunity to close this gap by automating routine processes, reducing errors, and enabling faster decision-making. However, to realise this potential, AI must be integrated thoughtfully into the workplace, with an emphasis on enhancing human capabilities rather than replacing them.

While the potential of AI is clear, its adoption has been slow, primarily due to concerns over safety, privacy, and bias. These fears are not unfounded, as the deployment of AI in business processes comes with risks that need to be carefully managed.

Trust in AI is critical for its successful implementation. Employees and business leaders alike need to be confident that AI systems are reliable, transparent, and aligned with business goals.

To build this trust, businesses must prioritise responsible AI strategies. This involves more than just implementing the latest technologies; it requires a commitment to transparency, explainability, and continuous education. Employees should be well-informed about how AI systems work, what data they use, and how decisions are made. This transparency is key to dispelling fears and ensuring that AI is seen as a supportive tool rather than a threat.

Leadership-driven AI

AI alone is not enough to drive the productivity gains the UK needs. Business leaders must take a proactive role in guiding their organisations through the AI revolution. This starts with a clear analysis of the specific efficiencies AI can deliver and the development of a transparent strategy for its adoption. Leaders must also address the cultural barriers to AI integration, such as resistance to change and lack of trust.


Moreover, employee motivation and engagement are critical to unlocking the full potential of AI. Unengaged employees are the biggest barrier to productivity. By leveraging AI to handle routine tasks, employees can focus on work that is more fulfilling and aligned with their skills, leading to higher engagement and, ultimately, greater productivity.

The UK stands on the brink of a significant productivity shift, with AI poised to play a central role. However, AI should not be viewed as a panacea. It is a powerful tool that can enhance productivity, but it must be implemented alongside thoughtful leadership, clear communication, and a commitment to building trust. By approaching AI adoption responsibly, businesses can not only improve productivity but also create a more motivated and engaged workforce. This balanced approach will be key to navigating the future of work and ensuring that AI serves as an enhancer, not a replacement, of human potential.

A coffee with…Jason Hill, UK CEO, Reply (22 August 2024)
Starting out as a tech support worker, Jason Hill has accumulated 30 years’ experience in the IT and consulting industry, with the last 15 spent as a tech leader at Reply, a global network of companies specialising in enterprise-based business solutions.

TechInformed met Hill in June, at the London leg of Reply Xchange, an annual multi-territory set of events that bring together IT professionals, creative thinkers, and tech enthusiasts to explore the role of technology in reshaping industries.

Among the Reply clients present at this year’s event were HSBC, Schroders, easyJet, UK Ministry of Defence and car maker Aston Martin.

Gen AI dominated this year’s ReplyXchange. In terms of your clients, how many are at the PoC stage with this technology, and how many are using it on live projects?

We’re at the end of the beginning with Gen AI – it’s no longer considered something scary.

While not every customer is doing something, the numbers on my chart so far this year show that, for the UK alone, we’ve had 500 requests for pure AI. We will do around 5,000 projects this year, so even if only 10% of these were pure AI and we didn’t take any more requests… it’s still a significant amount. I think by the end of the year around 25% of our projects will be pure AI, and it will influence many of the others.

Where do you see Gen AI use cases dominating?

One of our customers is applying AI across their contracts, which has led to savings in the tens of millions.

When you look at the characteristics of an AI project, the first step involves consolidating data into one place; the second is then putting a model on top of that to understand what the data means. So, the customers that have done the first piece and have applied a model on top of a large data set are the ones that are really going hard on this.

With sectors such as banking and finance, there seems to be friction between governance and regulation when it comes to AI…

What we must look at is how to better understand what some of these things do using AI, and if we can understand them, then we can regulate them. If we regulate them, we have compliance and if we have compliance then we can have resilience.

While there is some trepidation in financial services, we are certainly seeing some customers in this area look at AI ‘reg tech’ and how they can really understand what’s happening.

One of the regulations, BCBS 239, for instance, focusses on data lineage – understanding where the data has come from. That’s a great use case for AI because it’s quite a manual process – things hop between different systems or go through black-box algorithms that we don’t understand. So, we have some pilots looking at how we can apply AI to understand data lineage.

In terms of use cases, which sectors are embracing Gen AI?

Certainly, Customer Service and Sales. They are the two largest because they are the most obvious use cases.

When we look at CX, one of the challenges is that human capital is expensive. But that’s not the real problem – it’s that we offshored it, which had some success in reducing cost but then involved producing scripts. What we did was take the intelligence out of CX, because intelligence in the form of people is expensive.

What AI and LLMs allow us to do is put that intelligence back in so we can take the drudgery out of CX, but more importantly, because we can hook up some of the data that the agent wouldn’t have acquired before they received a blind call, they are able to help customers more.

And then with sales: for one of our customers in automotive, a stat on our dashboard shows that the drop-out rate of car purchases has fallen by 35%. That doesn’t mean they are selling 35% more cars, but 35% more journeys are finishing end-to-end with Gen AI, whereas before some customers would drop out along the way – so it is reducing friction.

[Image: Aston Martin dealership – Aston Martin is using Gen AI to help target the most convertible deals in its pipeline]

The Aston Martin case study we saw today is focussed on its AI propensity model and how Gen AI is helping the company target which deals in its pipeline are likely to convert.
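
For readers unfamiliar with the term, a propensity model in the classic machine-learning sense is simply a classifier that scores each open deal by its likelihood of converting. The sketch below is a minimal, hypothetical illustration in Python with scikit-learn; the feature names and figures are invented for the example and are not Reply’s or Aston Martin’s actual system.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical deals: [days_in_pipeline, test_drives, configurator_sessions]; label = 1 if the deal converted.
X_train = np.array([
    [10, 1, 3],
    [45, 0, 1],
    [5, 2, 6],
    [60, 0, 0],
    [20, 1, 2],
    [7, 3, 8],
])
y_train = np.array([1, 0, 1, 0, 1, 1])

model = LogisticRegression().fit(X_train, y_train)

# Score the open pipeline and surface the deals most likely to convert first.
open_deals = np.array([[12, 1, 4], [50, 0, 1], [8, 2, 5]])
scores = model.predict_proba(open_deals)[:, 1]
for deal, score in sorted(zip(open_deals.tolist(), scores), key=lambda pair: -pair[1]):
    print(deal, round(float(score), 2))

A real deployment would train on far richer CRM features, but the shape of the problem is the same: fit on past outcomes, then rank the open pipeline so that sales effort and Gen AI-assisted follow-up go to the deals most likely to close.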

Shield Reply is also doing some interesting work in defence, generating war scenarios…

Yes, that project is focussed on the digitisation of military scenarios for operational use and practice. Some are done in the field; some are done in the classroom. Classic scenarios include rescuing hostages, defending airspace and setting up exclusion zones.

As a training leader you typically need to change the scenarios that people play, and the planning cycle takes more time than the playing time – so we’re trying to flip this on its head.

Each model will learn from itself, but the planners are still able to build their own models, because there’s a lot of data and a lot of variables.

Do you see areas where AI is going to create new job roles?

We’re moving from governance roles to stewardship roles with AI. There’s a subtle difference. With governance you are trying to control a process or regulation but for stewardship roles you are thinking more about taking care of something and shepherding projects through various processes. How will these things work? And when they do work, who is taking care of it? And who is responsible for driving the right motivations?

Stewardship roles are not classic IT management either. It’s a slightly different approach to how you want to manage something. It’s an interesting set of skills that people will need to develop. But they are very human skills.

Would these ‘AI stewards’ necessarily have to come from a tech background?

It depends on the context. In the call centre, someone’s got to manage the way someone responds to texts, emails and IVRs. That level of curation doesn’t need a tech background. But under the hood, if you are looking at cloud and data and LLMs, you’ve got to be techy. The idea that AI is simple isn’t true. All we really do with technology is abstract it away so it becomes more usable for more people. But we’ve still got all the complexity underneath, and things are getting more technical rather than less technical.

If you have a conversational interface – fine – but how do we check what it’s doing and whether it integrates and shares and is compliant in the right way?

What do you do to switch off from work?

I’ve two sons and right now I’ve been helping them with exam revision. I’ve been doing GCSE computer science and A Level Business Studies and Chemistry.

Don’t all the kids use ChatGPT to help with homework now?

We’re not encouraging it. The school they go to has an AI check so that it can tell whether work was written by an AI or not.

What else do you do to relax?

Run, walk the dog, do the housework and cook. I’ve also taken up yoga recently – that’s probably the only thing I do where I have to switch off for half an hour without something popping into my head – mainly because I’m not very good at it; I have to concentrate, or I worry I might break something!

 

 

UK mulls post-Brexit bots to negotiate free trade deals (22 August 2024)
The UK government is considering using AI chatbots to help negotiate post-Brexit free trade deals, as well as for a range of other tasks, according to a blog post penned by AI and data ethics experts at the Department for Business and Trade.

Identifying topics and trends in free trade agreement texts to assist negotiators is one of several tasks that civil servants hope will help with productivity.

In the blog post, the department’s AI data ethics lead, James McBride, and data ethics manager, Emma Taylor, wrote that the ideas were being assessed for potential data protection and cybersecurity issues.

Twenty-eight submissions were received through the AI governance framework in total.

Other tasks that the civil service hopes AI can assist with include global trade forecasting, audio transcription of ministerial interviews, and reviewing job descriptions.

According to the blog, submissions have been split approximately 2:1 between various forms of generative AI tools and more traditional machine learning or Natural Language Processing (NLP) approaches.

ChatGPT was overwhelmingly the most suggested generative AI tool, though a range of more specialised tools were mentioned in submissions for more focused applications.

The blog added that none of the department’s plans for AI would “involve automated decision-making about individuals”.

The blog confirmed that submissions that include the use of generative AI are set to undergo further scrutiny to investigate risks unique to generative AI models, like hallucination and privacy concerns.


Commenting on this development, Javvad Malik, lead security awareness advocate at KnowBe4, was reassured that the tech was ultimately being used as a data filter, with human beings at the helm.

He said: “With AI prone to hallucinations (making up facts), the potential of information being leaked, or mistakes being copied verbatim, it is vital that people don’t become complacent and outsource all their work to AI.

“Rather, we need to not lose sight of the human element. AI in its current form is nothing beyond an assistant to the human mind, not a replacement. The department’s emphasis on using AI to filter data, with ‘experienced officials then making a final assessment,’ underlines a crucial balance between machine efficiency and human ingenuity,” he added.

Elsewhere in the UK’s Civil Service, the Cabinet Office is at the advanced trial stage of its Redbox Copilot project – which has followed the same governance framework that the Department for Business and Trade is currently using.

Redbox – named after the red briefcases used by ministers to carry official papers – is designed to search and analyse government papers and rapidly summarise them into briefings.

A coffee with… Grant Caley, UK and Ireland Solutions Director, NetApp (14 August 2024)
Grant Caley, director for UK and Ireland at data infrastructure firm NetApp, has been at the company for over two decades, witnessing firsthand the evolution of technology customers and their needs.

Storage giant NetApp serves diverse clients, including public sector organisations, banks, US government departments, energy companies, and even Formula 1 teams.

Caley, who nowadays opts for a white Americano, started his technology career working with databases in the NHS and as a technical designer at IBM.

The self-professed gamer, who owns a Steam Deck and VR headset, also touched on AI’s impact on customers’ sustainability and cybersecurity goals, the intricacies of working with motorsport data, and valuable lessons from the recent CrowdStrike Windows outage.


What has motivated you to stay at NetApp for 24 years?

The culture at NetApp has always been great. It’s constantly evolving with technological changes like AI and cloud computing. I’ve transitioned through various roles, starting in pre-sales in the UK, moving to a global role in pre-sales, and travelling the world for about seven years.

Eventually, I shifted to a chief technologist role in the UK, then into pre-sales management, and now I’m the pre-sales director. The variety and the great culture have kept me here.

 

Have you felt the effects of increasing use of generative AI?

We’re seeing a lot of interest and projects starting to spin up around generative AI. However, it’s still new territory for many customers who are figuring out how to use and integrate it with private data securely. The challenge is to use this technology efficiently while keeping data secure and sustainable.

 

Which industries are handling it the best?

It’s varied. Gen AI has applications across numerous fields — customer experience, programming, research, technical writing, and more. No single industry stands out because many are finding diverse and innovative uses for it. It’s like a Swiss Army knife; you need to figure out the best way to use it for your specific needs.

 

How do you address sustainability concerns regarding the use of AI?

AI relies heavily on data, which requires significant storage and processing power. We certainly talk to a lot of companies and advise them on optimising their data to use less infrastructure, which in turn uses less power and cooling.

One big mistake is just putting all of your data into generative AI — you don’t necessarily need to because a lot of it could be junk or irrelevant. Tidying up your data before feeding it in and using less data means less infrastructure, less cooling, and less energy.

 

Is data security a major consideration when exposing data to Gen AI models?

It does introduce new challenges. You’re exposing private and public data to these models. You need to think of the ‘cyber resilience wrapper’ that goes around this because using so much data makes your firm a target for cybercriminals.

Companies have to take many additional factors into account in that respect, not just using data but also securing it and ultimately making it recoverable.

If generative AI becomes critical to business operations and companies do get attacked, any loss could have massive regulatory repercussions. So, they must be fully secure and recoverable.

 

How did the CrowdStrike incident illustrate the risks of relying too heavily on a single vendor for IT security?

The CrowdStrike incident was interesting because it highlighted the reliance on single vendors for specific tasks within IT infrastructure. The FCA in the UK, for example, is introducing many rules about over-reliance on the cloud and warning financial services firms to be aware of this.

The DORA regulation has also arrived in Europe, making sure companies build protection mechanisms so that, if there is an outage in one environment, they’ve got the capability to fail over somewhere else, recover and keep running.

You can argue that the CrowdStrike incident was a wake-up call for many companies. Still, it will mean that we’ll start to see companies diversifying where they put their data and what technologies they use around it to ensure that they’re not reliant on single vendors.

 

How do motorsports teams exemplify strong data management practices?

We sponsor Aston Martin F1 and Porsche’s Formula E, and we’ve sponsored Ducati in Moto GP — I suppose someone on our marketing team must love racing.

Data in those sports is critical because every car and motorbike nowadays is almost like a mobile sensor array. They’re capturing video feeds and sensor feeds in real-time. All of that comes to the trackside, is manipulated to give the driver advice, and is then also passed to the design centre for onward processing and analytics.

Data is a huge driver of motorsports, and that’s one reason we work quite closely with those companies.

Although it doesn’t seem like it, you’d be amazed at some of the sustainability technologies they’re building into these cars. I think it’s good because those technologies will be in our cars five to six years from now, once they become commercialised.

Data comes into play when determining how much infrastructure you need to optimise. That’s a big part of what we do to help them be sustainable, at least on the infrastructure side.

 

How do you wind down and switch off at the end of a long day?

I’m a gadget geek. I enjoy playing with new technologies. My wife and I also enjoy walking with our two dogs in the Lake District, which is a great way to relax.

I’m into all kinds of gadgets, from smart home devices to gaming consoles like the Steam Deck and VR headsets. I can’t resist new and interesting tech.

 


Five best practices to protect your data privacy when implementing Gen AI (6 August 2024)
Gen AI is becoming increasingly popular, with many companies integrating it into their operations to enhance efficiency and innovation.

Furthermore, a McKinsey & Company survey shows more companies are using AI across multiple business functions — half of respondents reported adoption in two or more areas in 2024, up from less than a third in 2023.

Similarly, according to Statista, almost 11% of employees working at global firms have tried using ChatGPT in the workplace at least once.

However, this widespread adoption brings new security challenges, particularly regarding data privacy. For example, of those who used ChatGPT at work, almost 5% have put confidential corporate data into the AI-powered tool.

In fact, nearly one-third of employees have admitted to placing sensitive data into GenAI tools, making data leaks a top concern.

According to a report by AI security solutions provider Hidden Layer, more than three-quarters of companies either using or exploring AI have experienced AI-related security breaches.

How are businesses using Gen AI?

 

A study by Harmonic Security titled GenAI Unleashed found that employees uploaded data to an average of 8.25 Gen AI apps every month.

The study found that content creation, summarising, and editing were overwhelmingly popular among workplace users, with around 47% of prompts asking apps for help in those areas.

They were followed by software engineering (15%), data interpretation, processing, and analysis (12%), business and finance (7%) and problem-solving/troubleshooting (6%).

The most popular Gen AI tool by far was ChatGPT, used by 84% of users — 6 times more popular than Google Gemini (14%), the next most popular tool.

Alastair Paterson, co-founder and CEO of Harmonic Security, explains, “With a choice of over 5,000 GenAI apps and a high number of average apps used by employees, there are too many out there for IT departments to properly keep track of using existing tools. We particularly urge organisations to pay attention to apps that are training on customer data.”

How can companies ensure data privacy when using Gen AI tools?

 

TechInformed consulted industry experts to compile a list of best practices for safeguarding data privacy in the era of Gen AI; here are our top tips.

1. Avoid inputting personal or sensitive information into Gen AI LLMs

Generative AI tools and large language models (LLMs) can store and repurpose data provided to them. To prevent unauthorised access, avoid inputting personal or proprietary information into these tools.

Sebastian Gierlinger, VP of Engineering at Storyblok, says, “The biggest threat we are aware of is the potential for human error when using generative AI tools to result in data breaches. Employees sharing sensitive business information while using services such as ChatGPT risk that data will be retrieved later, which could lead to leaks of confidential data and subsequent hacks.”

He says the solution could be as simple as educating employees about how to use tools like ChatGPT safely.


That said, Leanne Allen, head of AI at KPMG UK, adds that “there are security measures that can remove sensitive or personal data automatically from prompts before they are used by a generative AI model. These measures can help mitigate the risk of data leaks and breaches of legally protected information – especially since human error will likely still occur.”
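
As a rough illustration of the kind of measure Allen describes, the sketch below shows a minimal prompt-redaction pass in Python. It is a hypothetical example rather than any vendor’s product: the patterns, labels and sample prompt are invented for illustration, and a production tool would add named-entity recognition and organisation-specific rules.

import re

# Illustrative patterns only; a real redaction layer would cover many more identifier types.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?<!\w)\+?\d[\d\s().-]{7,}\d\b"),
    "UK_NI": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),  # National Insurance number
}

def redact(prompt: str) -> str:
    # Replace detected identifiers with placeholder tokens before the prompt leaves the business.
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Email the renewal quote to jane.doe@example.com or call +44 20 7946 0958."
    print(redact(raw))
    # Prints: Email the renewal quote to [EMAIL REDACTED] or call [PHONE REDACTED].

Pairing an automatic filter like this with the employee education Gierlinger recommends gives two layers of defence against the same human error.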


2. Create and enforce an AI & privacy policy

A comprehensive company policy on AI usage and data privacy can help mitigate many risks associated with Gen AI tools.

Angus Allan, senior product manager at CreateFuture, says, “Establishing a clear AI policy from the outset can streamline this entire process by enabling businesses to tailor controls to their risk tolerance and specific use case.”

Allan stresses the importance of tailoring any policy to the specific company and addressing how AI will be uniquely leveraged for that industry and use case.

“An AI policy not only pre-empts data privacy risks but also sets clear expectations, reduces ambiguity, and empowers teams to focus on solving the right problems,” he says.

“In an era of GDPR and increased regulatory scrutiny of AI, it’s imperative for every business to get these basics right to minimise data risks and protect customers.”


3. Manage data privacy settings

Most Gen AI tools have features that allow users to disable data storage. Employees should navigate to the tool’s settings and disable such features to prevent company data from being used for AI model training.

Patrick Spencer, VP of corporate marketing at Kiteworks, explains, “A typical disablement feature looks something like this: navigate to Settings and, under Data Control, disable the ‘Improve Model for Everyone’ option. Regularly review permissions to prevent unnecessary data access, ensure privacy, and thwart unauthorised access.”

Deleting chat histories in AI tools can also reduce the risk of sensitive information being stored, he says.

“OpenAI typically deletes chats within 30 days; however, their usage policy specifies that some chats can be retained for security or legal reasons. To delete chats, access the AI tool’s settings and find the option to manage or delete chat history.”

He adds this should be done periodically to maintain data privacy and minimise vulnerabilities.


4. Regularly change passwords, or ditch them altogether

When using passwords, they should be long, complex, and unique for each account, including those linked to AI systems. However, CEO of cybersecurity startup Teleport, Ev Kontsevoy, advocates for moving away from passwords altogether.

He details, “Every enterprise housing modern infrastructure should cryptographically secure identities. This means basing access not on passwords but on physical-world attributes like biometric authentication and enforcing access with short-lived privileges that are only granted for individual tasks that need performing.”

Cryptographic identities consist of three components: the device’s machine identity, the employee’s biometric marker, and a PIN. By using them, Kontsevoy says, businesses can significantly reduce the attack surface that threat actors can exploit with social engineering tactics.

“If you need a poster child for this security model, it already exists, and it’s called the iPhone. It uses facial recognition for biometric authentication, a PIN code, and a Trusted Platform Module chip inside the phone that governs its ‘machine identity.’ This is why you never hear about iPhones getting hacked.”
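
To make the ‘short-lived privileges’ idea concrete, here is a minimal sketch in Python of minting and checking a task-scoped grant that expires after five minutes. It is an illustrative example only, not Teleport’s implementation, and it deliberately leaves out the machine-identity and biometric factors Kontsevoy describes, which a real system would bind into the credential.

import hashlib
import hmac
import json
import time

SIGNING_KEY = b"example-key-load-this-from-a-secrets-manager"  # illustrative placeholder

def issue_grant(user: str, task: str, ttl_seconds: int = 300) -> str:
    # Mint a signed grant scoped to a single task; there is no long-lived password to steal.
    claims = {"user": user, "task": task, "expires": int(time.time()) + ttl_seconds}
    payload = json.dumps(claims, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"

def verify_grant(grant: str, task: str) -> bool:
    # Accept only if the signature is valid, the grant has not expired, and it names this exact task.
    payload, signature = grant.rsplit("|", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False
    claims = json.loads(payload)
    return claims["task"] == task and time.time() < claims["expires"]

if __name__ == "__main__":
    grant = issue_grant("alice", "restart-payments-service")
    print(verify_grant(grant, "restart-payments-service"))  # True while the five minutes last
    print(verify_grant(grant, "drop-production-database"))  # False: wrong task scope

The point of the pattern is that a stolen credential is worth very little: it expires within minutes and only works for the one task it names.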


5. Disconnect your systems from the internet

Tony Hasek, CEO and co-founder of cybersecurity firm Goldilock, offers a unique solution: physical network segmentation – the ability to connect and disconnect networks at the press of a button.

“Through a hardware-based approach, physical network segmentation enables users to segment all digital assets, from LLMs to entire networks, remotely, instantly and without using the internet,” he says.

He adds that businesses can reduce the level of sensitive data exposure by rethinking which parts of their networks they keep online and moving away from an “always-on” model.

“Companies who are building their own internal large language models (LLMs) in-house are essentially creating a repository for their company’s most valuable data and intellectual property, including customer and employee data, trade secrets, and product strategies. This makes LLMs and other Gen AI models a prime target for cybercriminals.”

He concludes, “Keeping Gen AI models offline until they are needed to generate a response is a critical step in ensuring the valuable data they contain is kept safe, and physical network segmentation can ensure networks can switch from online to offline seamlessly.”


Now that you’ve handled security and data privacy, you can find out how to lead the adoption of Gen AI in your enterprise (when half of all uptake is happening outside the IT department) — read more here.

Navigating the rollout of Gen AI in enterprise (8 July 2024)
It’s no secret that businesses are turning to generative AI to enhance productivity, efficiency, and scalability. Yet very few could have predicted the speed and scale of adoption – namely that the use of generative AI within business processes would grow by an astonishing 400% in 2023.

This growth becomes even more significant in the context of increasing pressure to advance the rollout of generative AI in the UK. Last month, a new report from the Communications and Digital Committee called on the UK government to adopt a more positive vision for AI – with specific references to generative AI – to reap the social and economic benefits and enable the UK to compete in this area globally.

However, many UK businesses are already deep into their generative AI rollout strategies and aligning with the global trend of rapid adoption. With this, leaders are having to navigate unforeseen challenges related to internal ownership and governance. Historically, IT departments would be central to any technological rollout but when it comes to generative AI, the internal driving force is not the team you may expect.

 

Everyone is automating their work, regardless of technical skill

 

New research from the Work Automation Index 2024 reveals an unprecedented ‘democratisation’ of generative AI within businesses. Put simply, the hype around generative AI has prompted individuals within organisations to automate their work processes proactively.

As a result, the sheer volume of applications and processes within individual companies is rapidly increasing. Alongside this growth, there is a rise in both the number and variety of automation tools. While each new tool pledges to minimise fragmentation and revolutionise the enterprise, this ‘patchwork’ approach has exacerbated fragmentation. Instead of dismantling existing silos, UK businesses are inadvertently constructing new ones.

 

Understanding the democratisation of generative AI

 

The democratisation of generative AI is largely driven by the rise of low-code, no-code technology which has given employees the capabilities and confidence to automate their processes regardless of technical background.

The research found that nearly half (44%) of all automated processes are now built outside of IT. Employees no longer have to wait for the assistance of an IT specialist who would typically need to write lines of code to add a new search field to an internal database. Instead, employees across all departments of the business are empowered to introduce automation themselves.

There is a caveat, however: without a strong system of governance, scaling automation with generative AI can quickly become anarchy instead of a democracy. This is because automated processes with generative AI are growing more complex, requiring more steps than ever before. There will also be mixed levels of sophistication between internal departments, leading to discrepancies in security, scalability, change controls, and compliance, which ultimately increases business risk.

This risk is the reason behind IT departments taking on a ‘player-coach role’: 56% of automations are still built by IT personas, but IT is also being tasked with governance and guidance for the 44% handled by other teams within the business.

 

The value of taking a holistic approach

 

Whilst generative AI doesn’t follow traditional business patterns of implementation, there are many lessons to be learned from the successful rollout of other technologies. Typically, when organisations approach various types of automation, they start with narrow-scope business challenges and test the benefits and pitfalls before moving forward. With generative AI, there is much less willingness to stagger the rollout, with multiple departments making progress at different speeds all with different needs.

To maximise the potential of generative AI, the CIO and broader IT team need to become the guiding voice. If the CIO has clear sight of all the various stages of generative AI rolling out across the business, the necessary guidance and parameters around security, scalability, change controls, and compliance can be provided.

For new projects, IT can help the business take a more holistic view and encourage departments to look at the end-to-end processes of adopting AI and automation as opposed to having a short-sighted, task-oriented focus. By prioritising projects which have larger-scale benefits, rather than sporadic experimental use cases, there is huge potential for businesses to get more out of the technology.

Similarly, there are principles around growth, process, and scale that should be followed. These principles apply to automation generally but have relevance for generative AI in particular. For example, the process will be optimised when companies automate end-to-end processes rather than individual tasks. Meanwhile, companies with the right growth mindset will strive to embrace change and challenges in their processes, rather than build rigid, unchanging automations. Finally, companies must also establish the scale mindset; this requires embracing democratisation of data, allowing both business and IT teams to automate.

This new era with generative AI demands holistic leadership and the willingness to dismantle existing silos to pave the way for transformative change. By thinking differently about AI and automation, businesses are better placed to stand out from the competition and tap into their digital transformation journeys.

IBM and HCLTech launch Gen AI centres in US, UK and India (2 July 2024)
IBM and global digital transformation specialist HCLTech have teamed up to offer clients a Generative AI Centre of Excellence based at sites in the US, UK and India.

The centre will be based on IBM’s watsonx AI and data platform and made available through HCLTech’s AI and Cloud Native Labs, situated in Noida, London, New Jersey and Santa Clara.

The enterprise tech giants say the aim of the centres is to help companies on their Gen AI journey so they can experiment with use cases, reduce coding complexity and improve skills development.

The companies have also pledged to train 10,000 engineers and architects in IBM’s innovative AI technologies on the watsonx platform to help skill their resources and provide a platform for building use cases.

Clients of both companies will be offered access to several education and training resources covering a range of IBM’s watsonx-based technologies, including watsonx.ai, watsonx.data, watsonx.governance, watsonx Code Assistants, watsonx Orchestrate and watsonx Assistant.

Stephen Smith, general manager, service partners, IBM ecosystem, emphasised that driving adoption of responsible generative AI solutions was an important component of its collaboration with service partners such as HCLTech.

“Through this Center of Excellence, we plan to empower our joint clients to rapidly explore, experiment and engineer generative AI solutions with watsonx that are designed to meet their current business challenges,” he added.

Alan Flower, EVP and global head of AI & Cloud Native Labs at HCLTech, said that the IT specialist plans to embed watsonx in its Gen AI suite of solutions, HCLTech AI Force, to support code modernisation.

He added: “This expansion of our work with IBM will facilitate rapid exploration of AI’s potential as we create highly differentiated HCLTech offerings using the latest IBM technology.”

Last month CRM giant Salesforce launched its first AI centre in London to encourage customer and partner innovations and upskilling.
