AI Infrastructure

Simplismart secures $7M funding to empower AI adoption


OpenAI is projected to generate over $10 billion in revenue next year, a clear sign that the adoption of generative AI is accelerating. Yet most companies struggle to deploy large AI models in production: with the steep costs and complexities involved, an estimated 90% of machine learning projects never make it to production. Addressing this pressing issue, Simplismart recently announced a $7 million funding round for its infrastructure, which enables organizations to deploy AI models seamlessly. Much as the shift to cloud computing relied on tools like Terraform, and mobile app development was fueled by Android, Simplismart is positioning itself as a critical enabler for AI's transition into mainstream enterprise operations.

The Series A round was led by Accel, with participation from Shastra VC, Titan Capital, and high-profile angels including Akshay Kothari, co-founder of Notion. This tranche, more than ten times the size of the company's previous round, will fuel R&D and growth for its enterprise-focused MLOps orchestration platform.

The company was co-founded in 2022 by Amritanshu Jain, who tackled cloud infrastructure challenges at Oracle Cloud, and Devansh Ghatak, who honed his expertise in search algorithms at Google Search. In just two years, with under $1 million in initial funding, Simplismart has built what it describes as the world's fastest inference engine, outperforming public benchmarks. The engine allows organizations to run machine learning models at high speed, significantly boosting performance while driving down costs.

Simplismart's inference engine lets users apply optimized performance across all their model deployments. For example, its software-level optimizations run Llama 3.1 (8B) at a throughput of more than 440 tokens per second. While most competitors focus on hardware optimizations or cloud computing, Simplismart has engineered this speed breakthrough within a comprehensive MLOps platform tailored for on-prem enterprise deployments, agnostic to the choice of model and cloud platform.

“Building generative AI applications is a core need for enterprises today. However, the adoption of generative AI is far behind the rate of new developments. It’s because enterprises struggle with four bottlenecks: lack of standardized workflows, high costs leading to poor ROI, data privacy, and the need to control and customise the system to avoid downtime and limits from other services,” said Amritanshu Jain, Co-Founder and CEO at Simplismart.

Simplismart's platform offers organizations a declarative language (similar to Terraform) that simplifies fine-tuning, deploying, and monitoring generative AI models at scale. Third-party APIs often raise concerns around data security, rate limits, and a lack of flexibility, while deploying AI in-house comes with its own hurdles: access to computing power, model optimization, scaling infrastructure, CI/CD pipelines, and cost efficiency, all requiring highly skilled machine learning engineers. Simplismart's end-to-end MLOps platform standardizes these orchestration workflows, allowing teams to focus on their core product rather than spending countless engineer-hours building this infrastructure.
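To make the "declarative, Terraform-like" idea concrete, here is a minimal sketch of what a declarative model-deployment spec could look like. This is an illustrative example only; the class, field names, and manifest shape are invented for this sketch and are not Simplismart's actual DSL.

```python
from dataclasses import dataclass

# Hypothetical declarative deployment spec, loosely in the spirit of
# Terraform-style infrastructure-as-code. NOT Simplismart's real syntax:
# every field name here is an assumption made for illustration.
@dataclass
class ModelDeployment:
    model: str                   # e.g. a model identifier
    cloud: str = "on-prem"       # platform-agnostic: any cloud or on-prem
    gpu: str = "A100"
    min_replicas: int = 1
    max_replicas: int = 4

    def validate(self) -> None:
        # A declarative spec can be checked before anything is provisioned.
        if self.min_replicas < 1 or self.min_replicas > self.max_replicas:
            raise ValueError("replica bounds must satisfy 1 <= min <= max")

    def to_manifest(self) -> dict:
        """Render the spec as a plain dict, as a deploy pipeline might consume it."""
        self.validate()
        return {
            "model": self.model,
            "cloud": self.cloud,
            "gpu": self.gpu,
            "autoscaling": {"min": self.min_replicas, "max": self.max_replicas},
        }

# Declare *what* should run; the platform decides *how* to run it.
deployment = ModelDeployment(model="meta-llama/Llama-3.1-8B", max_replicas=8)
manifest = deployment.to_manifest()
```

The point of the declarative style is that teams describe the desired end state (model, hardware, scaling bounds) and leave provisioning, rollout, and monitoring to the platform, rather than scripting each step imperatively.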
