June 22, 2023

Mass adoption of generative AI tools is derailing one very important factor, says MIT

Many companies "were caught off guard by the spread of shadow AI use across the enterprise," Renieris and her co-authors observe. What's more, the rapid pace of AI advancements "is making it harder to use AI responsibly and is putting pressure on responsible AI programs to keep up." They warn that the risks arising from ever-growing shadow AI use are increasing, too. For example, companies' growing dependence on a burgeoning supply of third-party AI tools, along with the rapid adoption of generative AI -- algorithms (such as ChatGPT, DALL-E 2, and Midjourney) that use training data to generate realistic or seemingly factual text, images, or audio -- exposes them to new commercial, legal, and reputational risks that are difficult to track. The researchers stress the importance of responsible AI, which they define as "a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact."


From details to big picture: how to improve security effectiveness

Benjamin Franklin once wrote: “For the want of a nail, the shoe was lost; for the want of a shoe the horse was lost; and for the want of a horse the rider was lost, being overtaken and slain by the enemy, all for the want of care about a horseshoe nail.” It’s a saying with a history that goes back centuries, and it points out how small details can lead to big consequences. In IT security, we face a similar problem. There are so many interlocking parts in today’s IT infrastructure that it’s hard to keep track of all the assets, applications and systems in place. At the same time, the tide of new software vulnerabilities released each month threatens to overwhelm even the best organised security team. However, there is an approach that can solve this problem: rather than examining every single issue or new vulnerability that comes in, we can look for the ones that really matter. ... When you look at the total number of new vulnerabilities we faced in 2022 – 25,228 according to the CVE list – you might feel nervous, but only 93 of those vulnerabilities were actually exploited by malware.
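The triage idea above can be sketched in a few lines: instead of working through tens of thousands of CVEs, surface only those known to be exploited in the wild. This is a minimal, hypothetical sketch; the CVE records and the known-exploited set here are invented for illustration (in practice such a set might come from a feed like CISA's KEV catalogue).

```python
# Illustrative sketch: prioritize the small set of vulnerabilities that are
# actually being exploited, rather than triaging every published CVE.
from dataclasses import dataclass


@dataclass
class Vulnerability:
    cve_id: str
    cvss: float  # base severity score


def prioritize(vulns, known_exploited):
    """Split vulns into (exploited, rest); exploited sorted by severity."""
    urgent = [v for v in vulns if v.cve_id in known_exploited]
    backlog = [v for v in vulns if v.cve_id not in known_exploited]
    urgent.sort(key=lambda v: v.cvss, reverse=True)
    return urgent, backlog


# Invented example data -- not real CVE records.
vulns = [
    Vulnerability("CVE-2022-0001", 7.5),
    Vulnerability("CVE-2022-0002", 9.8),
    Vulnerability("CVE-2022-0003", 5.3),
]
known_exploited = {"CVE-2022-0002"}

urgent, backlog = prioritize(vulns, known_exploited)
print([v.cve_id for v in urgent])  # the short list that really matters
```

The point mirrors the 93-out-of-25,228 figure: the exploited subset is tiny, so patching effort concentrated there buys far more risk reduction per hour than severity-only triage.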


3 downsides of generative AI for cloud operations

While we’re busy putting finops systems in place to monitor and govern cloud costs, we could see a spike in the money spent supporting generative AI systems. What should you do about it? This is a business issue more than a technical one. Companies need to understand how and why cloud spending is occurring and what business benefits are being returned. Then the costs can be included in predefined budgets. This is a hot button for enterprises that have limits on cloud spending. Line-of-business developers would like to leverage generative AI systems, usually for valid business reasons. However, as explained earlier, these systems cost a ton, and companies need to find the money, the business justification, or both. In many instances, generative AI is what the cool kids use these days, but it’s often not cost-justifiable. Generative AI is sometimes applied to simple tactical tasks that would be handled just as well by more traditional development approaches. This overapplication of AI has been an ongoing problem since AI first emerged; the reality is that the technology is only justifiable for some business problems.
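The "include the costs in predefined budgets" step above can be sketched as a simple finops-style check. This is a hedged illustration only: the service names, figures, and the `check_ai_budget` helper are all invented for the example, not part of any real finops tool.

```python
# Hypothetical sketch: fold generative-AI cloud spend into a predefined
# monthly budget check so overruns surface before the invoice does.

def check_ai_budget(spend_by_service, monthly_budget):
    """Return (total AI spend, whether it exceeds the allotted budget)."""
    total = sum(spend_by_service.values())
    return total, total > monthly_budget


# Invented monthly figures, in dollars.
spend = {
    "llm-api-calls": 12_500.0,   # hosted model inference
    "gpu-instances": 8_200.0,    # fine-tuning / self-hosted inference
    "vector-store": 1_300.0,
}

total, exceeded = check_ai_budget(spend, monthly_budget=20_000.0)
print(f"AI spend: ${total:,.2f}, over budget: {exceeded}")
```

Breaking spend out per service, as here, also supports the article's point: it lets the business ask whether each line item is returning a benefit, not just whether the total fits the cap.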


Pros and cons of managed SASE

If a company decides to deploy SASE by going directly through SASE vendors, it will have to configure and implement the service itself, says Gartner’s Forest. “The benefits of a managed service provider are a single source for all setup and management, the ability to redeploy internal resources for other tasks, and the ability to access skills and capabilities that don’t exist internally,” he says. Getting in-house IT staff with the right expertise to handle SASE can be a real challenge, particularly in today’s hiring climate: 76% of IT employers say they’re having difficulty finding the hard and soft skills they need, and one in five organizations globally is having trouble finding skilled tech talent, according to a 2023 survey by ManpowerGroup. Access to outside experts is particularly appealing to companies that don’t have the resources to manage SASE themselves. Managed SASE providers have specialized expertise in deploying and managing SASE infrastructure, says Ilyoskhuja Ikromkhujaev, software engineer at software developer Nipendo, “which can help ensure that your system is set up correctly and stays up to date with the latest security features and protocols.”


The security interviews: Exploiting AI for good and for bad

AI has moved beyond automation. Looking at large language models, which some industry experts see as the tipping point that ultimately leads to wide-scale AI adoption, Heinemeyer believes that an AI capable of writing code offers attackers the opportunity to develop much more bespoke, tailored and sophisticated attacks. Imagine, he says, highly personalised phishing messages with error-free grammar and no spelling mistakes. For its customers, he says, Darktrace uses machine learning to learn what normal looks like in business email data: “We learn exactly how you communicate, what syntax you use in your emails, what attachments you receive, who you talk to, and when this is internal or external. We can detect if somebody sends an email that is unusual for you.” A large language model like ChatGPT reads everything on the public internet. The implication is that it will be reading people’s social media profiles, seeing who they interact with, their friends, and what they like and do not like. Such AI systems have the ability to truly understand someone, based on the publicly available information that can be gleaned across the web.
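The "learn what normal looks like" idea can be illustrated with a toy baseline over email metadata. To be clear, this is not Darktrace's method: real products build far richer behavioural models. Everything below (the sender-frequency baseline, `is_unusual`, the example addresses) is invented to show the general shape of anomaly detection against a learned norm.

```python
# Toy sketch of baseline-and-deviation detection on email metadata:
# learn how often each sender appears, then flag senders that fall
# below a familiarity threshold.
from collections import Counter


def learn_baseline(history):
    """Count how often each sender appears in past email metadata."""
    return Counter(msg["sender"] for msg in history)


def is_unusual(baseline, msg, min_seen=1):
    """Flag a message whose sender is below the familiarity threshold."""
    return baseline[msg["sender"]] < min_seen


# Invented training window: 25 past messages from two known colleagues.
history = ([{"sender": "alice@corp.example"}] * 20
           + [{"sender": "bob@corp.example"}] * 5)
baseline = learn_baseline(history)

print(is_unusual(baseline, {"sender": "alice@corp.example"}))    # familiar
print(is_unusual(baseline, {"sender": "mallory@evil.example"}))  # never seen
```

The same pattern generalises to the other signals the quote mentions (syntax, attachments, internal vs. external recipients): each becomes a learned distribution, and a message is scored by how far it deviates from all of them at once.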


Switching the Blame for a More Enlightened Cybersecurity Paradigm

The “blame the user” mentality is a cognitive bias that ignores the complexities of human-computer interaction. Research in cognitive psychology and human factors engineering has shown that humans are not designed to be perfect digital operators. Mistakes are a natural part of our interaction with systems, especially those that are complex and non-intuitive. Moreover, our susceptibility to scams and manipulation is not just a personal failing, but a product of millennia of evolution. For instance, social engineering attacks exploit our natural tendency to trust and cooperate, which has been crucial to human survival and societal development. To put the onus on the individual is to ignore the broader context. Shifting the blame is an easy way out. It absolves organizations of the responsibility to address systemic issues and allows them to maintain the status quo. This is underpinned by the “just-world hypothesis,” a cognitive bias which propounds that people get what they deserve. When an employee falls for a scam, it's easy to assume that they were careless or ill-prepared.
