I found the Republican presidential debate this week fascinating because, for once, it seemed to involve actual issues instead of regurgitated talking points. One comment caught my attention: an attack by former New Jersey Governor Chris Christie on Vivek Ramaswamy, implying he was a ChatGPT clone.
This wasn’t true, of course, but would that necessarily be a bad thing?
The critique goes to the struggle education is having over whether the use of ChatGPT is somehow cheating, and it mirrors similar concerns that using generative AI tools for work is also cheating. (This brought back a memory of being a kid on a farm, painting miles of fence surrounding the property but being told I couldn't use a spray paint rig because that made the work fun, and work shouldn't be fun. I disagreed then and still do now.)
The view that using gen AI means you’re cutting corners — because you should be doing everything the old way — is simply wrong-headed. Let me explain.
Breaking the ‘way-it’s-always-been-done’ habit
One of my frustrations when I first worked at a multinational company was that, whenever I came up with an innovative way to fix an otherwise unfixable problem, colleagues would tell me, “That’s not the way it’s done here,” or some variant thereof. Whenever you wanted to try something different to overcome an obstacle, there was no end of people coming up with historical reasons why you couldn’t.
There are always risks with doing anything differently. Innovation for its own sake is foolish, because if there is an established way that works, reinventing the wheel only adds risk. But if you need to innovate to accomplish something, preventing that innovation just assures failure.
Generative AI has massive potential to allow us to do more things, more quickly, while — if the model is trained properly — still assuring the same or even better quality. So why wouldn’t we use it for things we do infrequently, like debating or interviewing?
Using gen AI for debate
I was present when IBM (one of my clients) staged a debate using its Watson AI. Watson lost, but the definition of victory in that case was highly subjective. What struck me was that Watson came across as more human than the human debater. It used humor and voice inflection, and while that clearly didn’t help it win the debate, giving a political leader those qualities would be an advantage. People who haven’t rehearsed enough can come off as rigid and emotionally empty, a problem Watson didn’t have.
And how much “debate” is there in the president’s job? Presidents only seem to debate at election time, much as people in the workforce only interview when changing jobs; our interview skills don’t necessarily have much to do with our job skills. Interviewing, and being interviewed, is a skill in its own right. But other than media training, which few get, we don’t train most employees in public speaking. Even when we do, we…