Get Help
The Oso engineering team is available to help you as you build out authorization:

- Guidance on authorization best practices
- Answering questions on how to use Oso
- Authorization model and design reviews

We're available in Slack, along with hundreds of other developers working on authorization. You can also set up a 1x1 with an Oso engineer.

For customers who want an SLA on support, we provide paid support. This includes:

- Production support (24x7x365)
- Emergency hotfixes
- Security reviews
- Production readiness preparation

For more information on paid support, drop us an email.
Upgrading to 0.20

Migration guide for upgrading from Oso 0.15 to the new 0.20 release.

Releases
Changelogs for released versions.
Performance
This page explores the performance of Oso across three main axes:

1. In practice. How does Oso perform under typical workloads?
2. Internals and micro-benchmarks. How is Oso built? What are the micro-benchmarks?
3. Scaling. What is the theoretical complexity of a query?

In Practice

There are two main areas to consider when measuring the performance of Oso queries: the time to evaluate a query relative to a policy, and the time needed to fetch application data.

In a complex policy, the time it takes to run a single query depends on the complexity of the answer. For example, a simple rule that says anyone can "GET" the path "/" will execute in less than 1 ms. On the other hand, rules that use HTTP path mapping, resource lookups, roles, inheritance, etc. can take approximately 1-20 ms. (These numbers are based on queries executing against a local SQLite instance, to isolate Oso's performance from the time needed to perform database queries.)

The time needed to fetch application data is, of course, dependent on your specific environment and independent of Oso. Aggressive caching can reduce some of the effect of such latencies.

Profiling

Oso does not currently have built-in profiling tools, but this is a high-priority item on our near-term roadmap. Our benchmark suite uses Rust's statistical profiling package, but it is currently better suited to optimizing the implementation than to optimizing a specific policy.

Oso has a default maximum query execution time of 30 s. If you hit this maximum, it likely means that you have created an infinite loop in your policy. You can use the Polar debugger to help track down such bugs.

For performance issues caused by slow database queries or by too many database queries, we recommend addressing them at the data access layer, i.e., in the application. See, for example, our guidance on The "N+1 Problem".

Internals and Micro-benchmarks

The core of Oso is the Polar virtual machine, which is written in Rust. (For more on the architecture and implementation, see Internals.) A single step of the virtual machine takes approximately 1-2 µs, depending on the instruction or goal. Simple operations like comparisons and assignment typically take just a few instructions, whereas more complex operations like pattern matching against an application type or looking up application data need a few more. The debugger can show you the VM instructions remaining to be executed during a query using the goals command.

The current implementation of Oso has not yet been aggressively optimized for performance, but several low-hanging opportunities for optimization (namely, caches and indices) are on our near-term roadmap. We do ensure that all memory allocated during a query is reclaimed by its end, and our use of Rust ensures that the implementation is not vulnerable to many common classes of memory errors and leaks.

You can check out our current benchmark suite in the repository, along with instructions on how to run it. We would be delighted to accept any example queries that you would like to see profiled; please feel free to email us at engineering@osohq.com.

Scaling

At its core, answering queries against a declarative policy is a depth-first search problem: nodes correspond to rules, and nodes are connected if a rule references another rule in its body. As a result, the algorithmic complexity of a policy is in theory very large — exponential in the number of rules. However, in practice there shouldn't be that many distinct paths that need to be taken to make a policy decision. Oso filters out rules that cannot be applied to the inputs early in the execution. This means that if you are hitting a scaling issue, you can make your policies perform better either by splitting up your rules to limit the number of possibilities, or by adding more specializers to your rule heads.
For example, suppose you have 20 different resources, ResourceA, ResourceB, …, and each has 10 or so allow(actor, action, resource: ResourceA) rules. The performance of evaluating a rule with an input of type ResourceA will primarily depend on those 10 specific rules, not on the other 190. In addition, you might consider refactoring this rule to allow(actor, action, resource: ResourceA) if allowResourceA(actor, action, resource). This would mean there are only 20 allow rules to sort through, and for a given resource only one of them will ever need to be evaluated.

The performance of evaluating policies is usually independent of the number of users or resources in the application when fetching data is handled by your application. However, as in any programming system, you need to be on the lookout for linear and super-linear searches. For example, if you have a method user.expenses() that returns a list of the user's expenses, the check expense in user.expenses() will require O(n) VM instructions, where n is the length of the list. It would be better to replace the linear search with a single comparison, e.g. expense.user_id = user.id. Be especially careful when nesting such rules.

Summary

Oso typically answers simple authorization queries in less than 1 ms, but may take (much) longer depending on the complexity of your rules, the latency of application data access, and algorithmic choices. Simple techniques such as caching and refactoring can be used to improve performance where needed.
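The rule-splitting refactoring described in the Scaling discussion above can be sketched in Polar. The names ResourceA and allowResourceA and the body of the per-resource rule are illustrative, not part of the original text:

```polar
# One dispatching rule per resource type. The specializer
# (resource: ResourceA) lets Oso discard this rule early for
# inputs of any other type, so only the relevant rules are tried.
allow(actor, action, resource: ResourceA) if
    allowResourceA(actor, action, resource);

# The ~10 ResourceA-specific rules then live behind that single
# entry point. (Illustrative rule body.)
allowResourceA(actor, "read", resource) if
    actor = resource.owner;
```

With this shape, a query for a ResourceA input touches one allow rule plus the allowResourceA rules, rather than every allow rule in the policy.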
Security
This page is split into two sections with two distinct purposes:

- Security best practices for using Oso, and
- Our approach to building a secure product

Security Best Practices

Policy Authoring

To reduce the likelihood of writing logic bugs in Oso policies, we recommend using support for specializers as type checks wherever possible. For policy errors that are most likely due to incorrect policies, such as accessing attributes that don't exist, Oso returns hard errors. Problems that may be logic bugs, such as singletons (unused variables), are reported as warnings.

We additionally recommend the use of inline queries (?=) as simple policy unit tests. Since Oso is accessible as a library, you should test authorization as part of your application test suite.

Change Management

As a reminder, Oso typically replaces authorization logic that would otherwise exist in your application. By using Oso, you are able to move much of that logic into one or more separate policy files, which are easier to audit and watch for changes. Currently, the best practice for policy change management is to treat Oso policies like regular source code: use code review practices and CI/CD to make sure you have properly vetted and kept a history (e.g., through git) of all changes to authorization logic.

Auditing

If you are interested in capturing an audit log of policy decisions, and in being able to understand why Oso authorized a request, please contact us.

Our Approach to Building a Secure Product

Code

The core of Oso is written in Rust, which vastly reduces the risk of memory unsafety relative to many other low-level and embeddable languages (e.g., C and C++). The Oso engineering team codes defensively: we make extensive use of types, validate inputs, and handle errors safely. All source code is available in our GitHub repository. Releases are built and published using GitHub Actions.

Oso has not yet undergone a code audit. We plan to engage a qualified third party to perform an audit in the near future, and we will make the results publicly available.

Vulnerability Reporting

We appreciate any efforts to find and disclose vulnerabilities to us. If you would like to report an issue, or if you have any other security questions or concerns, please email us at security@osohq.com.
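As a small illustration of the inline-query testing practice recommended under Policy Authoring above (the rule itself is illustrative, not from the original text):

```polar
# A deliberately simple rule: any actor may GET the root path.
# The leading underscore marks _actor as intentionally unused,
# avoiding a singleton warning.
allow(_actor, "GET", "/");

# Inline queries run when the policy is loaded; if one fails,
# loading the policy fails, so they act as lightweight unit tests.
?= allow("alice", "GET", "/");
```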
Internals
Oso is supported in a number of languages, but the Oso core is written in Rust, with bindings for each specific language.

At the core of Oso is the Polar language. It handles parsing policy files and executing queries in the form of a virtual machine. Oso was designed from the outset to be natively embedded in different languages. It exposes a foreign function interface (FFI) that allows the calling language to drive the execution of the virtual machine.

Oso can read files with the .polar suffix, which are policy files written in Polar syntax. These are parsed and loaded into a knowledge base, which can be thought of as an in-memory cache of the rules in the file. Applications using Oso can tell it relevant information, for example by registering classes to be used with policies; these are similarly stored in the knowledge base. The Oso implementation can then be seen as a bridge between the policy code and the application classes. The Oso library is responsible for converting between Oso primitive types (like strings, numbers, and lists) and native application types (e.g., Python's str, int, and list classes), as well as for keeping track of instances of application classes.

When executing a query like oso.query("allow", [user, "view", expense]), Oso creates a new virtual machine to execute the query. The virtual machine executes as a coroutine with the native library, and therefore with your application. To make authorization decisions, your application asks Oso a question: is this (actor, action, resource) triple allowed? To answer the question, Oso may in turn ask questions of your application: What's the actor's name? What's their organization? What's the resource's id? And so on. The library provides answers by inspecting application data, and control passes back and forth until the dialog terminates with a final "yes" or "no" answer to the original authorization question. The virtual machine then halts, and the library returns the answer to your application as the authorization decision.

Data Filtering

Oso supports applying authorization logic at the ORM layer so that you can efficiently authorize entire data sets. For example, suppose you have millions of posts in a social media application created by thousands of users, and regular users are only authorized to view posts from their friends. It would be inefficient to fetch all of the posts and authorize them one by one. It would be much more efficient to distill from the policy a filter that the ORM can apply to return only the authorized posts. This idea can be used in any scenario where you need to authorize a subset of a large collection of data. The Oso policy engine can produce such filters from your policy.

How it works

Imagine the following authorization rule, which says a user is allowed to view any public social media post as well as their own private posts:

allow(user, "view", post) if post.access_level = "public" or post.creator = user;

For a particular user, we can ask two fundamental questions in the context of the above rule:

1. Is that user allowed to view a specific post, say, Post{id: 1}?
2. Which posts is that user allowed to view?

The answer to the first question is a boolean. The answer to the second is a set of constraints that must hold in order for any Post to be authorized. Oso can produce such constraints through partial evaluation of a policy. Instead of querying with a concrete object (e.g., Post{id: 1}), you can pass a Partial value, which signals to the engine that constraints should be collected for it. A successful query for a Partial value returns constraint expressions:

_this.access_level = "public" or _this.creator.id = 1

Partial evaluation is a generic capability of the Oso engine, but making use of it requires an adapter that translates the emitted constraint expressions into ORM filters. Our first two supported adapters are for the Django and SQLAlchemy ORMs, with more on the way. These adapters allow Oso to effectively translate policy logic into SQL WHERE clauses:

WHERE access_level = "public" OR creator.id = 1

In effect, authorization is enforced cooperatively by the policy engine and the ORM.

Alternative solutions

Partial evaluation is not the only way to efficiently apply authorization to collections of data.

- Manually applying WHERE clauses to reduce the search space (or using ActiveRecord-style scopes) requires additional application code and still needs to iterate over a potentially large collection.
- Authorizing the filter to be applied (or having Oso output the filter) doesn't require iterating over individual records, but it does force you to write policy over filters instead of over application types, which can lead to more complex policies and is a bit of a leaky abstraction.

Frameworks

To learn more about this feature and see usage examples, see our ORM-specific documentation:

- Filter Collections with Django
- Filter Collections with SQLAlchemy

More framework integrations are coming soon — join us on Slack to discuss your use case or open an issue on GitHub.
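The two fundamental questions above (a concrete yes/no check versus a reusable filter) can be illustrated in plain Python. This is a conceptual sketch only; the Post class and both functions are hypothetical and are not the Oso API:

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    access_level: str
    creator_id: int

def can_view(user_id: int, post: Post) -> bool:
    # Question 1: is this user allowed to view this specific post?
    return post.access_level == "public" or post.creator_id == user_id

def viewable_filter(user_id: int):
    # Question 2: a predicate over any Post, analogous to the
    # constraint expression an adapter would translate into
    # WHERE access_level = 'public' OR creator_id = :user_id
    return lambda post: post.access_level == "public" or post.creator_id == user_id

posts = [Post(1, "public", 2), Post(2, "private", 1), Post(3, "private", 2)]
visible_ids = [p.id for p in posts if viewable_filter(1)(p)]
# visible_ids: the public post plus user 1's own private post
```

The point of the adapter layer is that the second form never has to run in application code at all: the same constraints are handed to the ORM, which applies them in the database.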

Set up a 1x1 with an Oso Engineer

Our team is happy to help you get started with Oso. If you'd like to learn more about using Oso in your app or have any questions, schedule a 1x1 with an Oso engineer.

