Disclosure: Intel is a client of the author.
At Intel’s Innovation conference this past week, the company highlighted the next generation of Windows PCs, clearly anticipating Microsoft’s genAI Copilot tool, which can write documents for you, create presentations from comments, and automate much of what annoys everyone about Outlook. Intel offered up a number of interesting scenarios about this new class of hardware — due in December — that has the potential to transform work and entertainment.
Intel CEO Pat Gelsinger and Rich Uhlig, director of Intel Labs, had a lot to point to at the event.
AI and education (in China)
While there might be issues with this in the US, given the sensitivity here about educational content, Rich Uhlig highlighted a project in China that uses Intel’s new AI technology. It’s an AI-driven touchscreen display with a camera that can capture the interaction between teachers and students. As the AI learns what works and what doesn’t, it can coach educators on best practices and build up the capability to work autonomously to tutor or mentor kids. As in the US, teachers in China are spread thin, so students often don’t get the personal attention they need for the best education.
Automation could create AI-driven mentors that not only help children one-on-one but also help teachers become more effective at their jobs. This could also be useful in occupational training, and even post-hire training for new employees, by reducing the load on workers who would rather be doing their jobs than training someone else to do them.
This could enable broad training applications that don’t exist today.
Dealing with sound-challenged environments
Gelsinger, who is hearing impaired, presented new AI-driven hearing technology that goes beyond traditional hearing aids and can adapt based on conditions. For instance, if you are in a Zoom meeting, it would pull audio from Zoom and block out ambient noise. If someone approached, the user could mute the Zoom audio and switch to local sound — all the while automatically transcribing the meeting so the user doesn’t fall behind. It would also do real-time translation, which is invaluable for understanding what people are saying when you don’t speak their language.
A related capability, already built into a product coming to market, involves optimizing sound in a noisy environment. I’d find this useful because I have a really tough time hearing someone in an acoustically challenged venue. I used to work in construction, and this would have been a godsend by preventing some avoidable injuries.
AI and entertainment
When I travel, I like to watch videos and listen to music. Gelsinger demonstrated how AI tools could create unique content tailored to each individual user. For instance, if you like Taylor Swift’s sound but are tired of her lyrics about ex-boyfriends, AI could create a song…
Source: www.computerworld.com (2023-09-24)