Palo Alto-based startup Glean, founded in 2019 by former Google, Microsoft and Meta employees, has released a new generative-AI-based assistant, dubbed Glean Chat, designed to boost productivity and efficiency across enterprises via a conversational search interface.
Defining Glean Chat — an add-on to the company’s namesake enterprise search product — as the “Power BI of unstructured data,” CEO and founder Arvind Jain said that the generative AI assistant is targeted at helping employees find information across an enterprise’s applications and content repositories quickly and efficiently, with source citations.
Glean Chat offers an experience very similar to OpenAI’s ChatGPT, but limited to an enterprise’s content and resource boundaries, Jain said. When a user makes a natural-language query, the company’s search technology uses APIs to gather all the content and activity pertaining to the query — including information in applications — and stores it in the customer’s cloud environment. The stored data is then fed to large language models (LLMs), which have been trained on that particular enterprise’s data, to generate the search or query result.
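The retrieve-then-generate flow described above can be sketched as follows. This is an illustrative outline only, not Glean’s actual code: the function names, the naive keyword retrieval, and the stubbed-in LLM callable are all assumptions made for demonstration.

```python
# Minimal sketch of a retrieval-augmented query flow: search first,
# then have an LLM answer from the retrieved content, with citations.
# All names here (search_index, answer_with_citations) are hypothetical.

def search_index(query, documents):
    """Naive keyword retrieval: keep documents sharing a term with the query."""
    terms = set(query.lower().split())
    return [d for d in documents if terms & set(d["text"].lower().split())]

def answer_with_citations(query, documents, llm):
    """Retrieve relevant content, then ask the LLM to answer from it only."""
    hits = search_index(query, documents)
    context = "\n".join(f"[{h['source']}] {h['text']}" for h in hits)
    prompt = f"Answer using only this context:\n{context}\n\nQ: {query}"
    return {"answer": llm(prompt), "citations": [h["source"] for h in hits]}

docs = [
    {"source": "wiki/onboarding", "text": "New hires request a laptop via the IT portal"},
    {"source": "slack/#general", "text": "Lunch is at noon"},
]
# The llm argument is a stub standing in for a real model call.
result = answer_with_citations("How do new hires request a laptop?", docs,
                               llm=lambda p: "Via the IT portal.")
```

Returning the sources alongside the answer is what makes the citation links in the query result possible.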
The query result contains links to source information from documents, conversations and applications.
How Glean Chat is structured
Glean is built on five layers consisting of infrastructure, connectors, a governance engine, the company’s knowledge graph, and an adaptive AI layer, according to the company.
To connect to an enterprise’s applications and content repositories, Glean Chat uses self-developed connectors for data sources such as Salesforce, Zendesk, Jira, GitHub, Slack, Figma, Workday, Okta, Outlook, OneDrive, Google Drive, Box, Dropbox, and SharePoint, as well as storage offerings from AWS, Google Cloud, and Microsoft, among others.
The governance layer ensures that the generative AI follows an enterprise’s set boundaries and security policies such as identity and access management, the company said.
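One way to picture the governance layer’s role is as a permission filter applied to search results before they ever reach the model or the user. The sketch below is a hypothetical data model, not Glean’s governance engine, which is not publicly documented; the ACL dictionary and function names are assumptions.

```python
# Hedged sketch: enforce per-user access policies on retrieved results,
# so the assistant can only see and cite what the querying user can see.

def filter_by_access(results, user, acl):
    """Keep only results whose source the user is permitted to read."""
    allowed = acl.get(user, set())
    return [r for r in results if r["source"] in allowed]

# Hypothetical access-control lists mapping users to readable sources.
acl = {"alice": {"jira", "drive"}, "bob": {"drive"}}

results = [
    {"source": "jira", "text": "ticket details"},
    {"source": "drive", "text": "shared doc"},
]
```

Filtering at retrieval time, rather than after generation, prevents restricted content from leaking into the model’s context in the first place.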
The knowledge graph layer, which the company has developed over the last few years, understands relationships between content and employees and internal language in an enterprise, Jain said, adding that “this enables Glean to recognize nuances like how people collaborate, how each piece of information relates to another, and what information is most relevant to each user.”
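The “how people collaborate” signal can be illustrated with a toy graph traversal: documents reachable through a user’s collaborators are candidates for personalized relevance. This is a sketch under assumed data structures, not a description of Glean’s knowledge graph.

```python
# Illustrative sketch: a knowledge graph as an adjacency map linking
# people and content, traversed to find items related to a given user.

edges = {
    "alice": {"bob", "design-doc"},   # alice collaborates with bob, owns design-doc
    "bob": {"roadmap"},               # bob owns roadmap
}

def related(node, edges, depth=2):
    """Collect nodes reachable from `node` within `depth` hops."""
    seen, frontier = set(), {node}
    for _ in range(depth):
        frontier = {n for f in frontier for n in edges.get(f, set())} - seen
        seen |= frontier
    return seen
```

Here “roadmap” surfaces for alice only because of her collaboration edge to bob, which is the kind of relationship-aware relevance the knowledge graph is said to capture.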
Once an enterprise becomes a Glean subscriber, the knowledge graph layer, along with the large language models, is trained on that enterprise’s data, according to Jain.
The adaptive AI layer uses the information from the knowledge graph and runs it through LLM embeddings for semantic understanding and large language models for generative AI, the company said. LLM embeddings are vectors or arrays that are used to give context to artificial intelligence models, a process known as grounding. This process allows enterprises to avoid having to…
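The embedding step above amounts to comparing vectors for semantic closeness. The following is a toy sketch of that mechanic: real systems use learned LLM embeddings with hundreds or thousands of dimensions, while the hand-made three-element vectors here are placeholders for illustration.

```python
import math

# Toy sketch of semantic retrieval: embed documents as vectors, then
# return the one most similar to the query vector (cosine similarity).
# The vectors below are fabricated placeholders, not real embeddings.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

corpus = {
    "expense policy": [0.9, 0.1, 0.0],
    "oncall rotation": [0.0, 0.2, 0.9],
}

def nearest(query_vec, corpus):
    """Return the document whose embedding is most similar to the query."""
    return max(corpus, key=lambda doc: cosine(query_vec, corpus[doc]))
```

The document returned by `nearest` would then be placed in the LLM’s prompt as grounding context, tying the generated answer back to enterprise content.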
2023-06-17 03:00:03
Original from www.computerworld.com