Memory that shows its work. Every recall traces back to a stored memory and a knowledge-graph edge. Bring your own model. Plug in via MCP, REST, or official SDKs.
SEATTLE -- Lumetra today announced the general availability of Engram, a memory layer for AI agents. After a year of invitation-only beta access, Engram is opening its doors to all developers.
TL;DR
- 91.6% on LongMemEval-S (458/500) out of the box. Methodology and results published openly.
- Every recall is auditable: semantic retrieval plus an automatically maintained knowledge graph, so you can see which memory and which edge produced an answer.
- BYOM by default: frontier, open-source, or self-hosted models. No inference lock-in.
- Plug in three ways: MCP server, REST API, or official TypeScript, Python, and Go SDKs.
What Engram Does
An agent that talked with a user last week recalls their preferences today, and surfaces the exact stored memory and graph edge that produced the answer. Engram ingests conversation, extracts atomic facts and relationships, and stores them where they can be retrieved semantically and explained structurally.
Retrieval fuses three signals: keyword (BM25), semantic vector search, and traversal of the knowledge graph. Recall doesn't fail when a question is rephrased or when the answer depends on an implicit connection between memories.
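To make the three-signal fusion concrete, here is a minimal sketch using reciprocal rank fusion (RRF), a common way to combine ranked lists from heterogeneous retrievers. The signal names mirror the release; the fusion method, the `k` constant, and the sample memory IDs are illustrative assumptions, not Engram's published algorithm.

```python
# Fuse ranked result lists from three retrieval signals with
# reciprocal rank fusion: score(d) = sum over lists of 1 / (k + rank).
# A memory that appears high in several lists outscores one that
# appears high in only one.

def rrf_fuse(rankings, k=60):
    """Return memory IDs ordered by combined RRF score."""
    scores = {}
    for ranking in rankings:
        for rank, mem_id in enumerate(ranking, start=1):
            scores[mem_id] = scores.get(mem_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["m3", "m1", "m7"]    # keyword (BM25) ranking
vector_hits = ["m1", "m3", "m9"]  # semantic vector-search ranking
graph_hits = ["m9", "m1"]         # memories reached by graph traversal

fused = rrf_fuse([bm25_hits, vector_hits, graph_hits])
```

Here `m1` wins because all three signals agree on it, which is the behavior the paragraph above describes: a rephrased question (weak keyword match) can still recall the right memory through the vector and graph signals.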
For developers, that means memory stops being a black box. The recall path is inspectable end-to-end: every retrieved fact is grounded in a stored memory; every connection is grounded in a graph edge. If a recall is wrong, you can see why.
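The shape of such an inspectable recall can be sketched as a result object that carries its own provenance: the fact points at a stored memory, the connection at a graph edge. The field names and IDs below are illustrative assumptions, not Engram's actual schema.

```python
# A recall result that "shows its work": the answer is grounded in a
# stored memory ID, and any multi-hop connection is grounded in an
# explicit graph edge rather than an opaque similarity score.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class GraphEdge:
    src: str       # memory ID the edge starts from
    relation: str  # e.g. "prefers", "works_at"
    dst: str       # memory ID the edge points to

@dataclass(frozen=True)
class Recall:
    answer: str
    memory_id: str              # the stored memory that grounds the fact
    edge: Optional[GraphEdge]   # the connection that justified the hop, if any

recall = Recall(
    answer="User prefers dark mode",
    memory_id="mem_0142",
    edge=GraphEdge(src="mem_0142", relation="prefers", dst="mem_0098"),
)
```

Because the provenance travels with the answer, a wrong recall can be debugged by inspecting `memory_id` and `edge` directly instead of re-running the retriever.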
Memory That Shows Its Work
Memory products that bundle inference and hide the retrieval path require trusting the vendor about what their system did and why. Engram's design choice is the opposite: the system shows its work. This is the difference between a memory product and memory infrastructure.
Bring Your Own Model
Engram is BYOM by default. Developers connect their preferred LLM (frontier, open-source, or self-hosted), and Engram handles extraction, storage, and retrieval. No inference lock-in, no markup on tokens you could have bought directly.
Plug In Anywhere
Engram launches with three integration paths:
- The MCP server works with Claude.ai web, Claude Desktop, Claude Code, Cursor, Windsurf, Codex, ChatGPT, and OpenClaw out of the box.
- The REST API provides standard HTTP endpoints for ingest, query, memory management, and usage stats.
- Official SDKs are available for TypeScript (@lumetra/engram on npm), Python (lumetra-engram on PyPI), and Go.
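As one example of the REST path, the sketch below builds an ingest request from the Python standard library. The base URL, endpoint path, payload fields, and auth header are assumptions for illustration; consult https://lumetra.io/docs for the actual API. The request is constructed but not sent.

```python
# Build (but do not send) a hypothetical ingest request against the
# REST API. Endpoint and payload shape are illustrative assumptions.
import json
import urllib.request

BASE_URL = "https://api.lumetra.io"  # assumed base URL

def build_ingest_request(api_key: str, conversation: str) -> urllib.request.Request:
    body = json.dumps({"content": conversation}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/v1/memories",  # assumed ingest endpoint
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_ingest_request("sk-example", "User said they prefer dark mode.")
```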
Pricing
Engram launches with usage-based pricing. No per-seat, no per-project surcharges:
- Free is for evaluation and hobby projects.
- $29 per month covers indie developers and small teams.
- $99 per month covers production workloads.
- Enterprise is custom and includes dedicated support.
Paid tiers meter only on memories stored and retrievals served. There are no per-call inference fees that scale with your success, and no per-token surcharges layered on top of the model you already pay for.
A Year Behind Closed Doors
Engram spent the past year in invitation-only beta with design partner NeonBay (neonbay.ai), running in production while the team hardened the retrieval pipeline, ingest path, and recall quality. Today's launch opens that same system to every developer.
Quotes
"MCP changed the math on memory. Once a client speaks MCP, adding long-term memory is a config change instead of a rewrite. We built Engram MCP-native from day one because we think that's where the ecosystem is going."
Ben Meyerson, Co-Founder, Lumetra
"Most memory products are black boxes. You hand over your data and trust that the right thing comes back. Engram is built so every recall points to a stored memory and a graph edge. If you don't like an answer, you can see exactly where it came from."
Jacob Davis, Co-Founder, Lumetra
About Lumetra
Lumetra builds memory infrastructure for AI agents. Founded in 2025 by Ben Meyerson and Jacob Davis (previously on AWS IoT at Amazon Web Services), the company is headquartered in Seattle, WA. Engram is its first product.
Start free: https://lumetra.io
Documentation: https://lumetra.io/docs
LongMemEval methodology and results: https://lumetra.io/engram-on-longmemeval
Source: Lumetra, LLC