SolarWinds: IT Staff Dubious on Organisation's AI Readiness

Organisations are racing to implement AI, but are they ready for it?
A recent trends report by SolarWinds reveals that very few IT professionals are confident in their organisation's readiness to integrate AI

IT professionals are eager to harness the power of AI but are concerned about shortcomings in data quality, privacy and security.

The SolarWinds 2024 IT Trends Report, AI: Friend or Foe? found that very few IT professionals are confident in their organisation’s readiness to integrate AI due to limitations in data and infrastructure, as well as security concerns.

Notably, only 43% of respondents are confident their company’s databases can meet the increased needs of AI. Even fewer (38%) currently trust the quality of data or training used in developing AI technologies.

The report also cites privacy and security concerns as a top barrier to successful AI implementation. As a result, the IT professionals surveyed are calling for increased government regulation to address these concerns.

Security worries are stalling AI adoption

With current research estimating that generative AI (Gen AI) could add between US$2.6tn and US$4.4tn of value to the global economy annually if developed and deployed responsibly, the technology holds the potential to revolutionise key industries including healthcare, manufacturing and retail.

In fact, Oliver Wyman Forum has estimated that Gen AI could add up to US$20tn to global GDP by 2030.

Among the benefits businesses hope AI will bring to their operations are boosted productivity, workers freed up for more complex tasks and improved overall efficiency.

However, despite a near-unanimous desire to adopt AI, very few respondents to the SolarWinds report are confident in their organisation’s readiness to integrate the technology, pointing to limitations in data and infrastructure as well as security concerns.

If AI is to reach its full potential, there is a global responsibility to ensure it is developed safely. This means training models on diverse datasets to avoid AI bias, as well as harnessing the technology for positive use cases rather than to support cybercriminal activity.

The report unveiled significant insights into IT professionals' perspectives on AI, including:
  • 38% use AI to make IT operations more efficient
  • Today’s IT teams see AI as an advisor (33%) and a sidekick (20%) rather than a solo decision-maker
  • 41% of respondents said they’ve had negative experiences with AI; of those, privacy concerns (48%) and security risks (43%) were the most commonly cited reasons
  • More than half of respondents also believe government regulation should play a role in combating misinformation

“While talk of AI has dominated the industry, IT leaders and teams recognise the outsize risks of the still-developing technology, heightened by the rush to build AI quickly rather than smartly,” comments Krishna Sai, SVP, Technology and Engineering at SolarWinds. 

“With the proper internal systems in place and by prioritising security, fairness, and transparency while building AI, these technologies can serve as a valuable advisor and co-worker to overworked teams, but this survey shows that IT pros need to be consulted as their companies invest in AI.”

The importance of enterprise AI ethics 

In the current business climate, cyberattacks continue to rise and to inflict lasting damage on businesses around the world. In fact, cybercrime is predicted to cost the world US$9.5tn in 2024 alone, suggesting that businesses stand to benefit from a better understanding of AI moving forward.

As a result, sentiment towards AI reflects cautious optimism, according to SolarWinds. The report found that 46% of IT professionals want their company to move faster in implementing AI despite the costs, challenges and concerns, yet only 43% are confident their company’s databases can meet AI’s increased demands.

Significantly, even fewer (38%) trust the quality of the data or training used in developing AI technologies.

To ensure successful and secure AI adoption, respondents to the SolarWinds report recognise that organisations must develop thorough policies on ethics, data privacy and compliance. Notably, the report found that more than one-third of organisations (35.6%) still do not have policies in place to guide AI implementation.

Keeping data safe is more important than ever, particularly in the age of AI. As businesses consider how AI may impact them, preparing for its risks could help build customer trust and strengthen data protection compliance.


AI Magazine is a BizClik brand
