According to reports, Apple is improving the generative artificial intelligence (AI) capabilities of its mobile devices.
To achieve that goal, the company has begun hiring for dozens of positions focused on large language models (LLMs), the Financial Times reported Sunday (Aug. 6).
According to the job postings, Apple is working on “ambitious long-term research projects that will impact the future of Apple and our products.” The listings suggest Apple is concentrating on bringing technology like LLMs directly to mobile devices, whereas competitors such as Google have so far released AI products such as chatbots.
“We view AI and machine learning as core fundamental technologies that are integral to virtually every product that we build,” CEO Tim Cook stated this week during an earnings call.
According to the FT story, the company’s third-quarter R&D spending was $3.1 billion higher than in the same quarter of 2022, an increase Cook attributed to its generative AI initiatives.
LLM technology, as PYMNTS reported last week, is bringing AI “to new heights by expanding its capabilities beyond text to include images, speech, video, and even music.”
As corporations build LLMs, they must also contend with the challenges of gathering and categorizing massive volumes of data, as well as understanding the nuances of how these models operate and how they differ from earlier approaches.
“Technology giants such as Alphabet and Microsoft, as well as investors such as Fusion Fund and Scale VC, are investing in LLMs and forming partnerships,” PYMNTS stated. “The task facing technology businesses and investors is significant. It entails ensuring that their LLM protégés collect and train enormous data sets, referred to as parameters, and fine-tune them so that they execute and deliver desired outputs or results.”
With all this innovation and investment comes regulation, as governments around the world endeavor to better understand AI.
According to University of Pennsylvania Law School Professor Cary Coglianese, doing so may be easier said than done.
“Trying to regulate AI is a little bit like trying to regulate air or water,” the professor, who also serves as the founding head of the Penn Program on Regulation, said.
True, these things are already controlled across the world, but — like AI — they have unique traits that necessitate novel approaches to monitoring. According to Coglianese, controlling AI will be a multidimensional task that will vary based on the type of algorithm and how it is employed.
“It’s not just one thing. Regulators — and I do mean that plural, we will need multiple regulators — they must be agile, flexible, and vigilant,” he said, adding that “a single piece of legislation” will not solve the difficulties associated with AI.