Hazelcast has advanced its platform by integrating AI and vector search capabilities, leveraging large language models (LLMs) and embedding models for new use cases like semantic search, caching, and chatbots. In this episode of Let’s Talk, Anthony Griffin, Chief Architect at Hazelcast, emphasizes the importance of data distribution, resilience, and availability in the core platform, and shares his excitement about the technology’s potential impact on the industry.
Highlights of the show:
- Hazelcast’s new vector search capability and its potential applications in AI workloads.
- Griffin discusses his background at AWS, why he joined Hazelcast, and the company’s announcement of its vector search capability.
- Griffin discusses Hazelcast’s new distributed vector store, which opens up use cases like semantic search and integration with large language models.
- Hazelcast is investing in AI and generative AI, with Chief Scientist Steve Weston leading the effort; customers are also interested in using it for data-intensive applications.
- Griffin explains how the new vector search feature integrates with large language models to support AI workloads.
- Hazelcast aims to solve pain points in unstructured data processing by layering vector search on top of its core in-memory store (see the code sketch after the show links below).
Guest: Anthony Griffin (LinkedIn)
Company: Hazelcast (Twitter)
Show: Let’s Talk
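
For a concrete sense of what “layering vector search on top of the in-memory store” looks like to a developer, here is a minimal semantic-search sketch using the beta vector-search API of the Hazelcast Python client. The collection name, index name, vector dimension, and placeholder embeddings are illustrative assumptions; in practice the vectors would come from an embedding model, and the beta API may change between releases.

```python
import hazelcast
from hazelcast.vector import IndexConfig, Metric, Document, Vector, Type

# Connect to a running Hazelcast 5.5+ cluster (vector search is a beta feature).
client = hazelcast.HazelcastClient()

# Create a vector collection with a single cosine-similarity index.
# "articles", "embedding", and dimension=384 are illustrative assumptions.
client.create_vector_collection_config(
    "articles",
    indexes=[IndexConfig(name="embedding", metric=Metric.COSINE, dimension=384)],
)
articles = client.get_vector_collection("articles").blocking()

# In a real pipeline this vector would come from an embedding model;
# a constant placeholder stands in for one here.
fake_embedding = [0.1] * 384
articles.set(
    "doc-1",
    Document(
        "Hazelcast layers vector search on its in-memory store",
        Vector("embedding", Type.DENSE, fake_embedding),
    ),
)

# Semantic search: return the entries whose vectors are nearest the query vector.
results = articles.search_near_vector(
    Vector("embedding", Type.DENSE, fake_embedding),
    limit=3,
    include_value=True,
)
for result in results:
    print(result.key, result.score, result.value)

client.shutdown()
```

The design point the sketch illustrates is the one Griffin describes in the episode: the vector index lives alongside the distributed key-value entries, so nearest-neighbor search inherits the distribution, resilience, and availability properties of the core platform.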