Language Models at Oscar
Updated 12.01.23
How Oscar is structured, and how we are applying LLMs throughout that structure.
It is hard to “waterfall” development around language models.
New research and new language models that could be applied to Oscar's stack are published daily. And yet no one fully understands either the fundamental capabilities or the limitations of language models, not even theoretically. This is why we chose an iterative approach: generate lots of use cases, test fast, build a library of what works, learn a lot, share a lot.
It's not enough for just the tech org to have a command of the possibilities of AI; the entire company needs at least some familiarity. We've organized workshops and roadshows to unearth AI-driven opportunities in the far reaches of the business.
Everyone in the tech org (and every non-technical team that wants it) has access to a HIPAA-compliant, Oscar-internal web app with test data and a gRPC service that talks to the OpenAI models, so everyone can test out ideas immediately, from radical experiments to small but clever productivity hacks (like using AI to upgrade or deprecate Python libraries).
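To make that last example concrete, here is a minimal sketch of what a library-migration helper might look like. It is illustrative only: it calls the OpenAI API directly rather than through our internal gRPC service, and the model name, prompt, and file path are placeholders.

```python
# Illustrative sketch of the "upgrade a deprecated library" productivity hack.
# In our actual setup, requests go through a HIPAA-compliant internal gRPC
# service; here we call the OpenAI API directly for brevity.
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def suggest_migration(path: str, old_lib: str, new_lib: str) -> str:
    """Ask the model to rewrite a module from old_lib to new_lib."""
    source = Path(path).read_text()
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whatever model your org has approved
        messages=[
            {"role": "system", "content": "You are a careful Python refactoring assistant."},
            {
                "role": "user",
                "content": (
                    f"Rewrite this module to use {new_lib} instead of the deprecated "
                    f"{old_lib}, preserving behavior. Return only the updated code.\n\n{source}"
                ),
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical file path for illustration.
    print(suggest_migration("billing/client.py", "urllib2", "requests"))
```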
We’ve set up an agile but robust AI governance process to ensure we’re launching these features responsibly. This includes guiding teams on ways to measure and maximize the accuracy and performance of LLMs for their use case, and assessing regulatory requirements and ethical considerations.
We've decentralized development to each team, but keep a centralized list of ideas, a centralized prompt library, and a centralized channel for discussion and knowledge sharing (like in this rapid-fire run-through of use cases). By centralizing our efforts, everyone can build on what others are learning. We prefer to over-share.
With that said, let’s publish.
Internally, Oscar tech, operations and business development are organized into what we call the “Oscar Program Structure”. It is the most systematic answer to the question: what are all the components you need to execute well on to manage health risk? All Oscar tech sits somewhere in this tree structure. Click into the structure to find out how our LLM use cases support those drivers. This is a live document that we will keep updating as we launch new use cases, pilots and features. Keep checking back and give us feedback & ideas.
Oscar Program Structure
01: Developing Products (0 AI use cases)
How we package healthcare into products, price them and implement them in our systems.
02: Growing (2 AI use cases)
How we grow, accelerate our go-to-market and get our products to as many members as possible.
03: Making the Healthcare System Usable (4 AI use cases)
How we assemble the components of a useful care delivery system.
04: Shaping Care (7 AI use cases)
How we reduce healthcare costs and improve outcomes by nudging everyone towards the best care.
05: Managing Care (2 AI use cases)
How we put guardrails around care delivery through authorizations and active management.
06: Enabling Integrated Care (4 AI use cases)
How we help change the nature of care delivery by moving care out of the office and into alternative channels, such as virtual care.
08: Improving What We Know (2 AI use cases)
How we add to our understanding of how healthcare works, and make the system better for all stakeholders.
09: Managing Capital (0 AI use cases)
Health insurers require large amounts of capital to back them. Manage that responsibly.