Selecting a genAI companion: Trust, yet validate

Enterprise executives, still enthralled by the possibilities of generative artificial intelligence (genAI), are more often than not insisting that their IT departments figure out how to make the technology work.

Let’s set aside the usual concerns about genAI, such as the hallucinations and other errors that make it essential to check every single line it generates (and that obliterate any hoped-for efficiency boosts). Or that data leakage is inevitable and will be next to impossible to detect until it is too late. (OWASP has put together an impressive list of the biggest IT threats from genAI and LLMs in general.)

Logic and common sense have not always been the strengths of senior management when on a mission. That means the IT question will rarely be, “Should we do genAI? Does it make sense for us?” It will be: “We have been ordered to do it. What is the most cost-effective and secure way to proceed?”

With those questions in mind, I was intrigued by an Associated Press interview with AWS CEO Adam Selipsky — specifically this comment: “Most of our enterprise customers are not going to build models. Most of them want to use models that other people have built. The idea that one company is going to be supplying all the models in the world, I think, is just not realistic. We’ve discovered that customers need to experiment and we are providing that service.”

It’s a valid argument and a fair summation of the thinking of many top executives. But should it be? The choice is not merely buy versus build. Should the enterprise create and manage its own model? Rely on a big player (such as AWS, Microsoft, or Google)? Or use one of the dozens of smaller specialty players in the genAI arena?

It can be — and probably should be — a combination of all three, depending on the enterprise and its particular needs and objectives.

Although there are countless logistical details to consider, the fundamental enterprise IT question around genAI development and deployment is simple: trust.

The decision to use genAI has a lot in common with the enterprise cloud decision. In either case, a company is turning over much of its intellectual crown jewels (its most sensitive data) to a third party. And in both instances, the third party tries to offer as little visibility and control as possible.

In the cloud, enterprise tenants are rarely, if ever, told of configuration or other settings changes that directly affect their data. (Don’t even dream about a cloud vendor asking the enterprise tenant for permission to make those changes.)

With genAI, the similar questions are obvious: How is my data being safeguarded? How are genAI answers safeguarded? Is our data training a model that will be used by our competitors? For that matter, how do I know exactly what the model is being trained with?

As a practical matter, this will be handled (or avoided) via contracts, which brings us back to the choice of…

2023-12-20 13:41:02
Original from www.computerworld.com
