Use a database that gives you total control.
The FairCom benchmarking team achieved these results on one Dell R840 server with 72 cores, 768 GB RAM and 25.6 TB of NVMe SSD.
The test inserts billions of records in parallel across 70 sharded tables. Each table uses fixed-length records, has no indexes and no transaction processing.
This reference application demonstrates how you can use FairCom DB to control every aspect of the insert process. It uses the following features to configure the database for maximum insert performance:
FairCom DB is an advanced database engine that gives you ultimate control to achieve unprecedented performance with the lowest total cost of ownership (TCO).
FairCom DB delivers predictable high-velocity transactions and massively parallel big data analytics. It empowers developers with NoSQL APIs for processing binary data at machine speed and ANSI SQL for easy queries and analytics over the same binary data.
You do not conform to FairCom DB…FairCom DB conforms to you.
With FairCom DB, you are not forced to conform your needs to the limitations of the database. Instead, you conform FairCom DB to your business needs, giving you a database that serves your core business fast, reliably and efficiently.
c-treeACE is now FairCom DB
FairCom DB V12
Run FairCom DB anywhere & everywhere
NAV API Overview
FairCom’s NAV API gives your application an unprecedented level of control over every database action. NAV allows your favorite programming language to “remote control” the database. It can find any record (or subset of records), move to the next and previous records in index or data order, join to specific records in other tables, and choose which records to lock, which to include in a transaction, and which to retrieve, insert, update and delete. NAV also lets you save record positions and immediately jump back to them to retrieve records without an index lookup.
Use NAV to create an algorithm to process your data exactly the way you want with total control for extreme performance and efficiency. You can use any algorithm – including hundreds of existing graph algorithms. NAV also gives you automatic behaviors that make writing code easy and intuitive.
Use NAV to navigate from record to record and table to table using a wide variety of techniques, such as:
Use NAV to save and restore multiple record positions to instantly retrieve records or move a cursor to any record – without index lookups. With NAV, record positions can be immutable, making them safe to store inside records as “pointers” to other records. This makes graph traversal easy and fast.
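The idea of immutable record positions used as graph “pointers” can be sketched in plain Python. This is an illustration only, not FairCom’s implementation: a list stands in for a data file, list indexes stand in for saved positions, and the record layout is hypothetical.

```python
# Sketch of immutable record positions used as graph pointers: each record
# stores the positions of its neighbors, so traversal needs no index lookups.
# (An append-only Python list stands in for a data file.)

records = []   # a position is a stable index into this list

def add_record(name, neighbor_positions=()):
    records.append({"name": name, "links": list(neighbor_positions)})
    return len(records) - 1          # the saved position, safe to store in records

a = add_record("A")
b = add_record("B")
c = add_record("C", [a])
records[a]["links"] += [b, c]        # A -> B, A -> C; C -> A

def traverse(start, seen=None):
    """Depth-first walk that follows stored positions directly."""
    seen = set() if seen is None else seen
    if start in seen:
        return []
    seen.add(start)
    out = [records[start]["name"]]
    for pos in records[start]["links"]:
        out += traverse(pos, seen)
    return out

print(traverse(a))   # ['A', 'B', 'C']
```

Because a position never changes, it can be written into another record once and dereferenced forever, which is what makes graph traversal cheap.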
Use NAV to control whether and how a record gets locked, automatically or manually. You can combine automatic and manual locks for ease of use and total control. For locking records manually, you can use read, write, blocking, non-blocking and exclusive locks. For locking records automatically as your application navigates, use session lock modes such as no locks, read-only locks, read-write locks, blocking locks and non-blocking locks. You can suspend and resume automatic locking at any time and easily release all or some of the locks created by your session. You can freely intermingle SQL and NAV because locks are compatible across both.
With NAV, you can create all-or-nothing, isolated transactions that span multiple operations across many records, such as removing an amount from a debit record and adding it to a credit record. You control when transactions begin, end and nest, and whether specific records participate in a transaction. NAV allows you to create save points along the way and roll back to them as desired. NAV’s automatic transactions make ACID-compliant navigation easy, and its manual transactions provide total control to achieve extreme performance and any degree of consistency, from eventually consistent to fully ACID. You can also freely intermingle SQL and NAV because transactions are compatible across both.
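The debit/credit pattern with save points described above can be sketched with Python’s built-in sqlite3 module. This is not FairCom’s NAV API; the table, amounts and save-point name are all hypothetical, but the begin/save-point/rollback/commit flow is the same shape.

```python
import sqlite3

# Autocommit connection; we issue BEGIN/COMMIT ourselves for full control.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('debit', 100), ('credit', 0)")

conn.execute("BEGIN")                           # all-or-nothing transaction
conn.execute("UPDATE accounts SET balance = balance - 40 WHERE name = 'debit'")
conn.execute("SAVEPOINT after_debit")           # save point partway through
conn.execute("UPDATE accounts SET balance = balance + 9999 WHERE name = 'credit'")
conn.execute("ROLLBACK TO after_debit")         # undo only the mistaken credit
conn.execute("UPDATE accounts SET balance = balance + 40 WHERE name = 'credit'")
conn.execute("COMMIT")                          # debit and credit land together

print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'debit': 60, 'credit': 40}
```

Either the whole transfer commits or none of it does; the save point lets part of the work be undone without abandoning the transaction.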
No need to go it alone.
The highly rated FairCom Support and Professional Services teams are here 24/7/365.
FairCom DB does not limit you to one cluster topology. FairCom DB’s built-in Replication Manager makes it easy to mix and match data replication and failover to create any cluster topology you need for horizontal scalability and high availability.
Implement any cluster scenario including:
Control replication at all levels from individual data files to entire databases. Publish data once and subscribe to a publication many times across many servers. Don’t worry about performance because FairCom Replication uses multiple threads to continuously stream all changes at high speed, and it runs outside the database process to ensure database performance is not impacted.
Automatic failover options:
A browser-based graphical user interface, called Replication Manager, runs in a central location to configure, manage and monitor data replication across hundreds of servers and thousands of tables and files. FairCom Replication can also be automated through a JSON/HTTP web service API and a C/C++ API.
The left side of each continuum is "Total Control," which describes uniquely useful capabilities of the FairCom database. The right side is "Traditional Control," where the FairCom database operates like a SQL database. In between are multiple levels of control.
You choose your ideal level of control within and between continuums.
Run SQL queries over custom binary data and simultaneously traverse SQL rows using powerful NoSQL APIs.
Start developing with standard SQL APIs (on the right) and work through the lower-level APIs as you need more control.
Control every bit in every record and index…
The Low-level File API provides direct control over each data and index file.
You can read, write, process and index binary data without limitations at extreme velocity.
You can directly store complex, nested binary data in a record, such as BSON, Google Protocol Buffers, Google FlatBuffers, MessagePack, Apache Avro, etc.
You can manually populate indexes with any value for navigation and SQL queries. This is useful because it allows you to index anything, and it is extremely fast because it eliminates conversion layers.
Binary Record API
Store binary data as is, automatically index it, traverse indexes…
Binary Record API
The Binary Record API is an ISAM API. It provides direct control over binary records.
You directly control all reading, writing, finding, saving, positioning, navigating, locking and transacting at the record level.
You can store any binary data in a record.
The database can automatically index non-nested segments in each record. A segment is a field or part of a field. Segments can overlap.
You can write callbacks in C and C++ to extract fields out of nested binary structures so the database can automatically index them.
You can find records by indexed values, and navigate across binary records in index order and in data order — at extreme velocities.
This is useful because you can write algorithms to process data in custom ways that no other database can.
ANSI SQL API
Run SQL over binary and relational data…
ANSI SQL API
SQL provides easy, traditional access to relational data.
You can run SQL queries over the same binary data created and managed by the NAV, ISAM and Low-level APIs. Thus, you can combine the productivity of SQL with the power of binary.
You can use SQL to create, index and query tables.
You can automatically leverage an optimized join engine for fast filtered joins.
You can create and use stored procedures, functions and triggers.
This is useful because it maximizes developer productivity. Use SQL when it is the best fit, and use the NoSQL APIs when they are the best fit.
Developers have an unprecedented level of control over every aspect of the database.
Database behavior and APIs are predictable so a developer can consistently deliver high-performance solutions.
Modify source code with full support…
You can modify the source code with help and full support from FairCom engineers who write and maintain the database code.
This is useful for modifying database behavior, adding capabilities, adding custom security handshakes, shrinking the footprint, etc.
Safely extend the database…
You can safely extend the database using plug-ins.
Plug-ins are dynamic link libraries that run when the database runs.
Your compiled code runs in the same process as the database with full and independent access to all its functionality.
Your plug-in runs continuously and can do anything. It is a client of the database that can process data at unprecedented speeds.
This is useful for building server-side processes, adding communication protocols, embedding app servers, etc.
FairCom ships plug-ins that provide the following services:
Quickly process events…
The database can call your C and C++ code synchronously or asynchronously at all critical points of data processing.
This allows your code to do anything in response to data changes, such as populating custom indexes, converting data types, implementing proprietary encryption and compression, replicating data, notifying other applications, etc.
Use C, C++, Java, C#, VB, Node.js, Python…
These drivers make it easy for you to do everything with the database in your favorite language.
Because these NoSQL APIs "remote control" the database, they make the database an extension of your favorite language. This is the opposite of stored procedures, where you must write special code that the database runs. Instead, you write code in your application that directs the database to do exactly what you want, in the way you want. The drivers allow you to "remote control" the database to accomplish things no other database can do.
You directly control all reading, writing, finding, saving, positioning, navigating, locking and transacting at the row level.
You can also connect to multiple databases at the same time, which allows you to simultaneously "remote control" multiple databases. For example, you can look up a record in one database, use a column value to look up a set of matching records in another database, and use those matching records to filter a query against a third database. Thus, you can easily create and manage sharded data across databases with unprecedented control. This is vital for high-performance clustered applications because a one-size-fits-all database cluster from other database vendors cannot process data anywhere near as effectively as a custom algorithm.
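The multi-database scatter/gather idea can be sketched with sqlite3 connections standing in for connections to separate FairCom servers. The modulo hash routing and the table layout are assumptions made for the illustration:

```python
import sqlite3

# Three independent databases acting as shards (stand-ins for separate servers).
shards = [sqlite3.connect(":memory:") for _ in range(3)]
for db in shards:
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def shard_for(user_id):
    """Route a key to the shard that owns it — here, simple modulo hashing."""
    return shards[user_id % len(shards)]

# Scatter writes: each record goes only to its owning shard.
for uid, name in [(1, "Ada"), (2, "Grace"), (3, "Edsger"), (4, "Barbara")]:
    shard_for(uid).execute("INSERT INTO users VALUES (?, ?)", (uid, name))

# Gather reads: query every shard and merge the partial results.
rows = []
for db in shards:
    rows += db.execute("SELECT id, name FROM users").fetchall()
print(sorted(rows))
# [(1, 'Ada'), (2, 'Grace'), (3, 'Edsger'), (4, 'Barbara')]
```

The application owns the routing function, so it can shard by any algorithm that fits its data, not only by a hash the database vendor chose.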
SQL drivers make it easy for you to run SQL from your application.
You can use SQL and NoSQL drivers simultaneously in your application to get the best of both worlds.
Java & C# in SQL
Use SQL to call Java and C# stored procedures…
Java & C# in SQL
You can use Java and C# to write SQL triggers, functions and stored procedures.
This is useful because Java and C# allow you to do anything in response to a database event.
Process native CPU data types at machine speed using NoSQL APIs and simultaneously run SQL queries over the same data.
Custom Data Types
Store binary data as is, index and process it as relational data in SQL…
Custom Data Types
Create a custom data type by simply storing any binary data structure inside a record or inside a field.
You can index the data any way you want because FairCom indexes are designed to work with user-defined binary values. All index processing is done at the binary level: field extraction, comparison, sorting, lookups, traversal, etc.
You can write a function to make a binary data structure appear as traditional columns in SQL queries. For example, FairCom's c-treeRTG product contains functions that make COBOL and Btrieve data appear as traditional SQL data.
You can achieve unprecedented performance and convenience working at the binary level.
Machine Data Types
Process native CPU data types at machine speed: 2GB arrays, 2GB strings…
Machine Data Types
Store and process native CPU data types at machine speed.
Run machine data types across different CPU architectures with endian compatibility.
Leverage the following machine types:
Use derived types that are implemented as native machine types:
These standard machine types are directly supported and optimized by C compilers, and they are compatible with the native types of most programming languages.
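Endian compatibility for fixed-layout binary records can be illustrated with Python's struct module. The field layout here is hypothetical; the point is that pinning the byte order in the format string makes the record decode identically on any CPU architecture:

```python
import struct

# Fixed-layout record: little-endian uint32 id, int16 reading, 8-byte tag.
# The "<" prefix pins the byte order and disables padding, so the packed
# bytes mean the same thing on big-endian and little-endian machines.
record_layout = struct.Struct("<Ih8s")

packed = record_layout.pack(42, -7, b"sensor-a")
print(len(packed))                 # 14 bytes: 4 + 2 + 8, no padding

rec_id, reading, tag = record_layout.unpack(packed)
print(rec_id, reading, tag)        # 42 -7 b'sensor-a'
```

Native machine types like these cost nothing to parse, which is why processing them runs at machine speed.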
SQL Data Types
Run SQL over native machine data types: VARCHAR, VARBINARY, BIGINT…
SQL Data Types
Use a wide variety of SQL data types:
The benefit is that SQL types are internally implemented as machine data types. This makes it easy for SQL to query data created and processed in the NoSQL APIs, and it allows the NoSQL APIs to process data at machine speed.
Index anything: arrays of bytes, binary structures, fields in records and SQL columns.
Control each byte indexed…
Manually create binary value(s) to be indexed for each record.
Index parts of any binary structure – no matter how complex.
Store any binary structure in a record while storing traditional values in indexes.
For example, you can directly store the binary values of BSON, Protocol Buffers, FlatBuffers or MessagePack in a record. You can write a callback function to extract the values you want to be indexed. The database uses the indexed values for navigation and SQL queries. You can also write a callback function to extract values on demand for SQL queries to process as columns and rows. This is the best approach when you want the fastest possible writes.
Alternatively, you can create a callback function that extracts data to be written as rows and columns. The database can then index and query columns directly without additional callbacks. This is the best approach when you want the fastest possible queries.
These two approaches give you complete control over database performance. You can make the desired trade-off between insert/update performance and read/query performance.
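The callback-driven approach above can be sketched in Python: records are opaque packed bytes, and a user-supplied callback extracts the value to index. The record format, the callback and the dictionary-based "index" are all assumptions made for illustration, not FairCom's mechanism:

```python
import struct

# Records are opaque binary blobs; only the callback knows their layout.
def make_record(user_id, score):
    return struct.pack("<Ii", user_id, score)

def extract_key(record):
    """Index callback: pull the value to be indexed out of the raw bytes."""
    user_id, _score = struct.unpack("<Ii", record)
    return user_id

records = [make_record(7, 90), make_record(3, 55), make_record(9, 70)]

# The "index": extracted key -> record position, kept sorted for range scans.
index = sorted((extract_key(r), pos) for pos, r in enumerate(records))
print(index)   # [(3, 1), (7, 0), (9, 2)]

# Point lookup via the index, then fetch the record by saved position.
pos = dict(index)[7]
print(struct.unpack("<Ii", records[pos]))   # (7, 90)
```

Writes stay fast because the record is stored as-is; only the extracted keys pay the cost of index maintenance.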
Index binary data structures…
Binary Structure Indexes
Automatically index segments of a binary structure as if they were SQL columns.
Automatically index bytes at absolute and variable positions.
Treat a sequence of bytes as a native data type, such as a Signed Integer, Unsigned Integer, BCD, etc.
Independently collate each segment in ascending or descending order.
Automatically flip bytes in an index to enable fast leading wildcard lookups, such as using "*son" to find "Anderson", "Carlson", etc.
Create a custom collating sequence that automatically transforms indexed bytes into desired binary values, such as lower case with no diacritics collated in a custom sort order.
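The byte-flipping trick for leading wildcards can be shown directly: index each name reversed, and a "*son" search becomes an ordinary prefix scan on "nos…". This pure-Python sketch uses a sorted list in place of a B-tree index:

```python
import bisect

names = ["Anderson", "Carlson", "Baker", "Larson", "Smith"]

# The index stores each key reversed and sorted, so a leading wildcard
# turns into a fast prefix scan on the reversed key.
rev_index = sorted((name[::-1].lower(), name) for name in names)

def ends_with(suffix):
    prefix = suffix[::-1].lower()           # "*son" -> scan keys starting "nos"
    start = bisect.bisect_left(rev_index, (prefix, ""))
    out = []
    for key, name in rev_index[start:]:
        if not key.startswith(prefix):
            break                           # past the matching range
        out.append(name)
    return sorted(out)

print(ends_with("son"))   # ['Anderson', 'Carlson', 'Larson']
```

Without the flip, a trailing-suffix search must scan the whole index; with it, the lookup is as cheap as any prefix search.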
Selectively index records…
Control which records are included in an index by simply using a filter that works like a SQL WHERE expression.
This greatly speeds queries because indexes contain a smaller set of items.
You can create many filtered indexes on a table for fast pre-filtered queries.
Because extra indexes slow down inserts, updates and deletes, you can use deferred indexing to update these indexes asynchronously.
Index one or more SQL columns…
Quickly, easily and automatically index one or more columns like all SQL databases.
SQL makes it easy for anyone to create, process and query tables and indexes.
Indexes created in SQL can be used in lower-level APIs, and vice-versa.
One benefit is that you can easily create tables and indexes in SQL and use the powerful NAV API to process data with precise control at high speed with predictable performance.
Eliminate latency between applications and databases and maximize deployment flexibility.
Embed the database in app servers…
Eliminate latency between the application and database because compiling the database into the application allows both to run in the same process on the same server.
Eliminate database deployment because the database is embedded in the application.
Eliminate database administration because your application can directly control all database operations through APIs and because the database is designed to run without human intervention. The database efficiently heals and tunes itself.
Eliminate version incompatibilities between the database and the application.
Even when the database is compiled or linked into an application, other, external applications can simultaneously communicate with the database.
Run microservices at exceptional speed…
Eliminate latency between the application and database because they both run in the same process on the same server.
Eliminate separate database installation because the database is linked directly into the application.
Easily upgrade the database to new versions by simply deploying a new dynamic library file.
The entire database is embedded in a single file, which is a DLL on Windows, an SO on Linux and Unix, and a dylib on macOS.
External applications can also communicate with any embedded database when you enable the optional external communication protocols.
An embedded database allows a single application to scale to tens of thousands of ACID transactions per second on a single server.
It is easy to run multiple applications on the same server, and each application contains its own database.
An embedded database excels at running massively parallel analytics.
An embedded database is ideal for microservices because each app server has its own database linked directly into it. This eliminates latency, greatly simplifies deployment, and allows an application to scale horizontally to millions of transactions per second by simply adding more app servers. Each app server can use FairCom's built-in replication to keep its data in sync with other app servers and with central databases. Each app server can also query and modify data on central databases and each other. The architectural possibilities are limitless.
Scale horizontally and flexibly...
FairCom DB uses TCP/IP to connect with applications running on separate servers.
FairCom DB drivers automatically connect to the database using the fastest protocol. When applications and the database are running on the same server, the driver connects using shared memory. When they run on separate servers, it connects through TCP/IP. This makes connections easy to manage, and it provides maximum flexibility and scalability for application and database deployments.
Applications running in separate servers can connect to many databases running on separate servers.
Data can be sharded across servers for virtually unlimited scalability.
This option makes it easy to scale horizontally.
It is ideal for cloud deployments.
Multiple servers can scale to millions of ACID transactions per second (TPS).
Combine some or all of the following clustering technologies to create a custom clustering topology that matches your exact needs.
Custom scatter writes and gather reads…
The FairCom Database Engine is unlike any other database in that it is designed specifically for custom clustering. It is ideal for creating highly customized clusters to meet the exact needs of a highly available, horizontally scaled application.
FairCom's NoSQL and SQL APIs are designed to connect to multiple FairCom databases at the same time. These databases can be running on any FairCom server running anywhere. The APIs can process data across all connected databases as if they were one large database.
An application can easily partition records using any algorithm to scatter and gather data across databases running on multiple servers.
FairCom's NAV API makes it easy to "remote control" databases by walking indexes and data files one record at a time or in batches of records. This makes it easy to write custom algorithms to gather and join data that is spread across multiple tables in multiple databases. Below are three examples:
Precise control over locking is a key ingredient in creating custom clusters that range from eventually consistent to ACID-compliant, or anywhere in between. FairCom NoSQL APIs give applications full control over record locking, from automatic to manual.
Combine custom scatter and gather operations with data replication and automatic failover to put the final touches on a custom cluster.
Asynchronous Bidirectional Replication
Create eventually consistent clusters…
Clustering with Asynchronous Bidirectional Replication
The FairCom Replication Manager user interface makes it easy to create eventually consistent clusters that allow the application to write to any server in the group at any time. No matter where data is written, it is replicated to one or more servers using asynchronous bidirectional replication. This allows a cluster to span multiple data centers for data locality, high availability and scalability.
When one database fails, another immediately takes over without downtime.
Because the data is not shared across servers, you can use local SSD storage for maximum predictable real-time performance without compromising high availability.
This option can be combined with some or all of the other clustering options to create the optimal cluster topology. For example, it can be combined with sharding to spread data horizontally across the cluster for high scalability. Asynchronous bidirectional replication is applied to each shard to replicate data to one or more servers, and FairCom Failover can be applied to each group of replicated servers to ensure the application can always write to a running server, even if some have failed. This makes each shard highly available. Because the replicated servers can run in any data center and can be written to simultaneously, the cluster is eventually consistent, highly scalable and highly available.
Create ACID-compliant clusters…
Clustering with Synchronous Replication
The FairCom Replication Manager user interface makes it easy to create shared-nothing, high-availability groups, ensuring two or more database servers always contain copies of the exact same committed data.
When one database fails, another immediately takes over without downtime.
Because the data is not shared across servers, you can use local SSD storage for maximum predictable real-time performance without compromising high availability.
This option is beneficial for creating highly available, high-velocity solutions across two or more servers in the same data center.
This option can be combined with some or all of the other clustering options to create the optimal cluster topology. For example, synchronous replication can be combined with FairCom Failover to create a highly available cluster. To create a disaster recovery solution, a synchronous cluster can be combined with another server in another data center using asynchronous unidirectional replication.
Replicate reference data…
In-memory replication can be combined with some or all of the other clustering options to create the ideal cluster topology. The database can automatically replicate data from read/write tables in a central database to read-only, in-memory tables located in many other databases.
The data is durable in the central database. In the other databases, it is cached in RAM for maximally fast local data processing.
This feature is beneficial when multiple database servers need to join to the same shared data. Because shared data is replicated into the RAM of each server, the database can quickly join shared data with local data.
When data changes in the central database, it is immediately updated across all other databases.
When a replicated database is rebooted, it automatically retrieves the replicated data from the central database.
This option is ideal for real-time querying and joining of the same data across many servers, such as master data, reference data, metadata, data dictionaries, configuration data, etc.
Automatic failover can be combined with some or all of the other clustering options to create the ideal cluster topology. FairCom DB supports three types of automatic failover.
FairCom Failover detects failures in a cluster and automatically fails a primary server over to a secondary server. More than one secondary can be added to the failover group when the highest possible availability is required. Each server has independent storage for higher availability and performance, and all servers in the group are active. Data replication keeps the data up to date on every server in the cluster: synchronous replication is used for ACID-compliant clusters, where the primary server is read-write and the secondaries are read-only, and asynchronous bidirectional replication is used for eventually consistent clusters, where all servers in the cluster are read-write.
Linux Cluster uses the operating system to detect and fail over from a primary to a secondary server. The FairCom Database Engine integrates with the OS to make this work. There are two cluster options:
Windows Cluster works similarly to the Linux Cluster with the same setup options, advantages and disadvantages.
Simultaneously use the full range of FairCom DB consistency options from 100% ACID compliant to eventually consistent.
There are four aspects of consistency: Atomicity, Transaction Consistency, Isolation and Durability.
The next four continuums of control illustrate how you can customize consistency of each database and each table to meet the precise needs of your application.
FairCom DB gives you control over how transactions commit and roll back as a group.
Extremely fast inserts, updates and…
FairCom DB allows individual tables to support transaction logging or not. When transaction logging is not enabled, the data operations performed on a table are not under transaction control – they are non-atomic.
Tables with non-atomic transactions run much faster because the database does not need to maintain transaction logs. The downside is that transactions cannot be rolled back automatically and data replication does not work without transaction logs.
Non-atomic transactions are ideal for:
For example, Apache Cassandra is optimized for non-atomic transactions. This is a key reason why it is fast at writing data and why it is good for all the use cases listed above. FairCom DB's non-transaction files are faster than Apache Cassandra because its engine is written in C and has been tuned for 40 years. FairCom DB can achieve millions of inserts per second on a single server, and it can scale to hundreds of servers. It takes dozens of Cassandra servers to come close to FairCom DB's performance.
FairCom DB provides atomicity with and without transaction logs. With transaction logs, it provides full ACID compliance. Without transaction logs, it provides atomicity and isolation with a Continuum of Control over durability that trades a small risk of losing some data for significant increases in performance.
Atomicity with Transaction Logs
Atomic transactions ensure one or more insert, update and delete operations all succeed or all fail as a group. This is a cornerstone of ACID compliance.
This option is ideal for applications that process complex multi-record transactions within and across tables that must all succeed or all fail as a group.
By default, the FairCom DB database maintains transaction logs, and its tables log changes to them. These tables have atomic transactions.
Transaction logs provide many benefits:
Atomicity Without Transaction Logs
FairCom provides a unique option to configure individual tables not to use transaction logs and to still have atomic and isolated transactions. This valuable ability is made possible by FairCom DB's pre-image technology, which provides atomicity and isolation without the performance cost of transaction logs.
This technology is useful when you need extreme performance along with isolation and atomicity. This comes with the risk that an abnormal server termination may lose some data and some indexes may need to be rebuilt. FairCom DB gives you options to mitigate this risk by forcing the table and its indexes to write all changes immediately to disk while caching them in RAM for high performance queries. FairCom provides another option to mitigate this risk by flushing the file's cache to disk on a regular schedule or on demand when the application has finished saving critical data.
FairCom DB's pre-image feature gives you the ability to have extreme write performance, atomicity and isolation along with precise control over the level of durability – without the performance cost of transaction logs.
FairCom DB's pre-image technology provides the best features of Multiversion Concurrency Control (MVCC) found in PostgreSQL without the cost of periodically running cleanup processes to remove outdated data from indexes and tables. It works similarly to Oracle Database's UNDO and REDO logs, but it does not have the performance cost of writing logs to disk. It provides faster performance than Cassandra while providing atomicity and isolation that Cassandra cannot offer.
FairCom DB provides a Continuum of Control over consistency to allow each type of record to meet required trade-offs between performance, capability and consistency.
ACID Consistency ensures that, at a single point in time, the same piece of data always has the same value wherever it occurs across multiple shards, tables, indexes and queries. Eventual Consistency allows the same piece of data to be different wherever it occurs while eventually making it the same. Partial consistency lies in between.
FairCom DB allows each type of record to have the precise type of consistency it needs.
Eventual Consistency allows the same piece of data to be different across multiple shards, tables, indexes and queries. The data eventually becomes consistent. All aspects of the database perform faster because they do not have the overhead of point-in-time consistency.
Eventual Consistency requires a developer to deal with additional challenges:
FairCom DB gives you full control over Eventual Consistency to reap its benefits and deal with its challenges.
Per table consistency…
FairCom DB allows you to control the consistency of each table: whether it is Eventually Consistent, ACID Consistent, Preimage ACID Consistent, Not Consistent, or Temporarily Consistent.
A Continuum of Control over consistency is vital for all applications. Each type of record (i.e. table or data file) has different requirements for performance, capabilities and consistency.
Per table, you can implement Eventual Consistency by turning on transaction logging, asynchronous bidirectional replication and manual locking. Eventual Consistency works best when global availability and scalability are most important.
Per table, you can implement ACID Consistency by turning on transaction processing and automatic locking. ACID Consistency works best when data consistency, query isolation and all-or-nothing transactions are most important, such as banking transactions, precise control of mission-critical data, concurrent updates without conflicts, etc.
Per table, you can implement Preimage ACID Consistency by turning on Preimage transactions and Direct IO. The table has all ACID capabilities without transaction logs, which greatly increases write performance. It works best for collecting high-velocity data for real-time analytics, gaming leaderboards, stock market transactions, etc.
Per table, you can implement No Consistency by not using transaction logging and locking. This greatly increases write performance, but it decreases durability and disables atomicity and isolation. No Consistency works best for locally cached data, temporary tables, bulk-loading data into a data warehouse, collecting data from many devices, etc.
Per table, you can implement Temporary Consistency by temporarily disabling transactions on a table created with transaction logging. This allows the table to perform data operations at exceptional velocity.
Manual locking is a Continuum of Control by itself. It is available in all consistency models. Through the NoSQL APIs, you can use read and write locks as desired to create any level of consistency and isolation.
Tables without transaction logging, such as the last three consistency models above, have much faster write performance because data is written only once, but they have limited durability and capabilities: they cannot participate in point-in-time backups, restores, audits, data replication or high-availability clustering.
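These per-table choices can be summarized in a small sketch (plain Python; the model and flag names are illustrative only, not FairCom configuration keywords):

```python
# Illustrative feature switches behind each per-table consistency model.
# These names are descriptive only, not FairCom configuration keywords.
MODELS = {
    "eventual":      {"transaction_log": True,  "transactions": False,
                      "locking": "manual"},
    "acid":          {"transaction_log": True,  "transactions": True,
                      "locking": "automatic"},
    "preimage_acid": {"transaction_log": False, "transactions": True,
                      "locking": "automatic"},
    "none":          {"transaction_log": False, "transactions": False,
                      "locking": "none"},
}

def writes_per_record(model):
    # With a transaction log, each record is written twice: once to the
    # log and once to the data file. Without it, data is written once.
    return 2 if MODELS[model]["transaction_log"] else 1

assert writes_per_record("acid") == 2
assert writes_per_record("preimage_acid") == 1
```

The `writes_per_record` helper captures why log-less tables write faster: dropping the transaction log halves the number of disk writes per record.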
Per index consistency…
During ACID-compliant transactions, each index added to a table further slows inserts, updates and deletes because all indexes must be updated before a transaction can commit.
FairCom DB provides two types of index consistency: normal and deferred.
Deferred indexing introduces inconsistency between data in a table and its indexes. Index data quickly catches up to the table data, but until it catches up, queries return inconsistent results.
Deferred indexes work well for use cases where write speed matters more than momentarily exact query results.
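Conceptually, deferred indexing decouples index maintenance from the insert itself. The sketch below (hypothetical Python, not FairCom's actual API) queues index entries for a background thread, so inserts return before the index is current and a lookup may briefly miss new rows:

```python
import queue
import threading

class DeferredIndexTable:
    """A table whose index is maintained by a background thread."""

    def __init__(self):
        self.rows = {}                 # row_id -> value
        self.index = {}                # value -> set of row_ids
        self._pending = queue.Queue()  # queued index entries
        worker = threading.Thread(target=self._apply_index_entries,
                                  daemon=True)
        worker.start()

    def insert(self, row_id, value):
        # The insert returns as soon as the row is written; the index
        # entry is applied later by the background thread.
        self.rows[row_id] = value
        self._pending.put((row_id, value))

    def _apply_index_entries(self):
        while True:
            row_id, value = self._pending.get()
            self.index.setdefault(value, set()).add(row_id)
            self._pending.task_done()

    def lookup(self, value):
        # May briefly miss rows whose index entries are still queued.
        return self.index.get(value, set())

    def wait_until_consistent(self):
        self._pending.join()           # let the index catch up

table = DeferredIndexTable()
for i in range(1000):
    table.insert(i, i % 10)
table.wait_until_consistent()
print(len(table.lookup(3)))            # → 100 once the index catches up
```

Until `wait_until_consistent` returns, a `lookup` can undercount: that window is exactly the inconsistency the paragraph above describes.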
Consistency across servers…
You can use FairCom's two-phase transaction API to cause a transaction to span multiple database servers.
This ensures all data in a transaction succeeds or fails on both servers with ACID compliance.
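The protocol behind this guarantee can be sketched in a few lines (a conceptual Python sketch; `Server`, `prepare` and `two_phase_commit` are illustrative names, not FairCom's two-phase transaction API). A coordinator asks every server to prepare, and only if all vote yes does it tell them to commit; otherwise every server rolls back:

```python
class Server:
    """A participant in a two-phase commit."""

    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.data = {}
        self._staged = {}

    def prepare(self, changes):
        # Phase 1: stage the changes and vote yes or no.
        if not self.can_commit:
            return False
        self._staged = dict(changes)
        return True

    def commit(self):
        # Phase 2: make the staged changes permanent.
        self.data.update(self._staged)
        self._staged = {}

    def rollback(self):
        self._staged = {}

def two_phase_commit(servers, changes):
    """Commit on all servers or on none of them."""
    if all(s.prepare(changes) for s in servers):
        for s in servers:
            s.commit()
        return True
    for s in servers:
        s.rollback()
    return False

a, b = Server("a"), Server("b", can_commit=False)
assert two_phase_commit([a, b], {"k": 1}) is False
assert a.data == {} and b.data == {}     # neither server committed
b.can_commit = True
assert two_phase_commit([a, b], {"k": 1}) is True
assert a.data == {"k": 1} == b.data      # both servers committed
```

The single failed vote forces a rollback everywhere, which is the all-or-nothing property described above.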
FairCom DB supports ACID Consistency on a per table basis. FairCom's SQL engine automatically creates and uses tables with ACID Consistency. FairCom's NoSQL APIs create and use tables using a continuum of options ranging from Eventually Consistent through ACID Consistent. This gives your application total control over the transactional design of each table.
Regardless of a table's consistency model, FairCom's SQL and NoSQL APIs can process it simultaneously. This is a unique and powerful capability of FairCom DB.
ACID Consistency requires atomicity, consistency, isolation and durability. In other words, all transacted data in the database transitions from one state to the next according to the database's rules and the four ACID properties.
FairCom's transaction engine implements transaction processing in a unique way that allows each table to occupy a different point on the Continuum of Control, from ACID to Eventually Consistent.
In addition, transaction logging enables hot backups, point-in-time restores, transaction auditing, data replication, high availability and global scalability without impacting database performance.
Isolation ensures the uncommitted changes made by your transactions are not visible to other users' queries, and vice versa.
You can control the isolation level of each database, table and row.
Exceptional read speed…
No Isolation is the default for FairCom's NoSQL APIs. This is one reason they are so fast.
FairCom tables have slightly different default isolation behaviors depending on whether they are created with transaction logging.
Non-transaction tables provide no isolation by default.
Transaction tables provide isolation from uncommitted changes.
FairCom SQL provides a minimum of Read Committed isolation.
Isolate exactly as much as you want…
Performance increases as isolation decreases. Thus, FairCom DB's NoSQL APIs default to no isolation for maximum performance, and they allow you to add locks as needed to create your desired level of isolation.
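As an illustration of that idea (plain Python, not FairCom's lock API), a writer can lock exactly the row it is changing while every other row remains free for lock-less reads:

```python
import threading

class LockableTable:
    """Reads take no locks; writers lock individual rows on demand."""

    def __init__(self, rows):
        self.rows = dict(rows)
        self._locks = {key: threading.Lock() for key in rows}

    def read(self, key):
        # No isolation: reads never wait, which maximizes speed.
        return self.rows[key]

    def locked_update(self, key, fn):
        # Lock only this row; every other row stays unlocked.
        with self._locks[key]:
            self.rows[key] = fn(self.rows[key])

table = LockableTable({"a": 0, "b": 0})

def add_100(key):
    for _ in range(100):
        table.locked_update(key, lambda v: v + 1)

threads = [threading.Thread(target=add_100, args=("a",)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(table.read("a"))  # → 400: the per-row write lock made the updates safe
```

Adding locks only where conflicts actually occur is the essence of the manual-locking continuum: consistency exactly where you need it, full speed everywhere else.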
FairCom SQL supports two of the four standard SQL isolation levels: Read Committed and Repeatable Read. SQL clients can specify either option per connection. Through configuration, FairCom DB can globally limit SQL isolation to one of these levels or allow both.
SQL uses locks to enforce isolation to ensure the SQL and NoSQL APIs work seamlessly over the same data.
SQL Read Committed
Automatically isolate from uncommitted writes…
Read Committed is the default Isolation Level for SQL queries in FairCom DB. It ensures a SQL query sees no uncommitted data changes made by concurrent users. It allows a query to see uncommitted data changes previously made by its own transaction. If the query is repeated in the same transaction, it will include changes from other users who subsequently committed inserts, deletes or updates. It will never include uncommitted changes from other users.
SQL uses locks to enforce isolation to ensure the SQL and NoSQL APIs work well together over the same data. Thus, when Read Committed Isolation is not fast enough for an operation, you can use the NoSQL APIs to control locks more precisely for maximal speed. Also, when you need more isolation, you can achieve the precise amount you want with the NoSQL APIs.
SQL clients can specify Read Committed Isolation when they connect. Through configuration, FairCom DB can globally limit SQL Isolation to be Read Committed.
Read Committed is the default Isolation Level for many SQL databases including SQL Server, Oracle, DB2, PostgreSQL, etc.
SQL Repeatable Read
Automatically isolate from new rows…
SQL Repeatable Read is an optional Isolation Level for SQL queries in FairCom DB. It ensures a query sees no uncommitted data changes made by concurrent users. It allows a query to see uncommitted data changes previously made by its own transaction. Unlike Read Committed Isolation, if the query is repeated in the same transaction, it will not include rows that were subsequently updated or deleted by concurrent users, but it will still include committed inserts from concurrent users. It will never include uncommitted changes from other users.
SQL uses locks to enforce isolation to ensure the SQL and NoSQL APIs work well together over the same data. Thus, when Repeatable Read Isolation is not fast enough for an operation, you can use the NoSQL APIs to control locks more precisely for maximal speed. Also, when you need more isolation, you can achieve the precise amount you want with the NoSQL APIs.
SQL clients can specify Repeatable Read Isolation when they connect. Through configuration, FairCom DB can globally limit SQL Isolation to be Repeatable Read.
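The behavioral difference between the two levels can be simulated in miniature (a conceptual Python sketch, not FairCom's engine): Read Committed lets a repeated query see concurrent committed updates, while Repeatable Read locks the rows already read against update, yet newly inserted rows still appear:

```python
class Table:
    """A tiny row store with optional read locks (conceptual only)."""

    def __init__(self, rows):
        self.rows = dict(rows)       # row_id -> value
        self.read_locks = set()

    def query(self, repeatable_read=False):
        if repeatable_read:
            # Repeatable Read: lock every row the query touches
            # until the transaction ends.
            self.read_locks |= set(self.rows)
        return dict(self.rows)

    def update(self, row_id, value):
        # A concurrent writer cannot touch a read-locked row.
        if row_id in self.read_locks:
            raise RuntimeError("row is read-locked by another transaction")
        self.rows[row_id] = value

    def end_transaction(self):
        self.read_locks.clear()

# Read Committed: a repeated query sees the concurrent committed update.
t = Table({1: "old"})
t.query()                            # first read takes no locks
t.update(1, "new")                   # another user commits an update
assert t.query()[1] == "new"         # non-repeatable read

# Repeatable Read: updates are blocked, but a new row still appears.
t = Table({1: "old"})
t.query(repeatable_read=True)
try:
    t.update(1, "new")               # concurrent update is rejected
except RuntimeError:
    pass
t.update(2, "phantom")               # insert of a brand-new row succeeds
again = t.query(repeatable_read=True)
assert again[1] == "old"             # the read repeats exactly
assert 2 in again                    # the phantom row appears
t.end_transaction()
```

The second half of the sketch shows why Repeatable Read still permits phantom inserts: only rows that have already been read are locked.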
Durability ensures your data is stored safely on persistent storage. FairCom DB gives you a Continuum of Control to achieve unprecedented performance and durability.
Fastest storage, period…
In-memory tables provide the fastest possible performance, and when you put them in non-volatile RAM, they provide full durability.
You can optionally create them with preimage atomicity, consistency and isolation, which means they can fully participate in transactions with other tables without using transaction logs. They support all transaction features including all-or-nothing commits, rollbacks and savepoints. When they are persisted in non-volatile RAM, they become durable and are, thus, fully ACID compliant.
They are useful for any table that can fit entirely in RAM.
In-memory tables are not automatically backed up, but you can easily write their records into a table that can.
In-memory transactions are not included in transaction logs, which means in-memory tables cannot participate in log-based data replication; however, they can be replicated using in-memory replication.
Fastest transaction IO…
You can configure transaction logs to be cached by the operating system for up to 3x faster performance. This is called Delayed Durability.
Without Delayed Durability, FairCom DB immediately flushes each committed transaction to the transaction logs. This ensures each committed transaction is physically on disk.
With Delayed Durability, FairCom DB allows the operating system (OS) to cache transaction logs and eventually write them to disk. This greatly increases the speed of transactions, but if the OS or hardware fails before a transaction is flushed to disk, that transaction is lost and the transaction file may contain incomplete data.
You can mitigate the risk of losing transactions in several ways.
Delayed Durability is beneficial for applications that need maximum speed and the benefits of transactions while accepting the low risk of losing around one second's worth of records.
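Delayed Durability amounts to a flush policy on the transaction log. This sketch (hypothetical Python, modeling the log as a simple append-only file) contrasts flushing on every commit with flushing at most once per interval:

```python
import os
import time

class TransactionLog:
    """Append-only transaction log with immediate or delayed flushing."""

    def __init__(self, path, delayed=False, interval=1.0):
        self.f = open(path, "ab")
        self.delayed = delayed
        self.interval = interval
        self.last_flush = time.monotonic()

    def commit(self, record: bytes):
        self.f.write(record + b"\n")
        if self.delayed:
            # Delayed Durability: fsync at most once per interval, so a
            # crash can lose up to `interval` seconds of commits.
            now = time.monotonic()
            if now - self.last_flush >= self.interval:
                self._flush()
                self.last_flush = now
        else:
            # Full durability: every commit is forced to the device.
            self._flush()

    def _flush(self):
        self.f.flush()
        os.fsync(self.f.fileno())

    def close(self):
        self._flush()
        self.f.close()

if os.path.exists("tlog.bin"):
    os.remove("tlog.bin")            # start the demo from a clean log

log = TransactionLog("tlog.bin", delayed=True)
for i in range(10000):
    log.commit(b"txn %d" % i)        # most commits skip the costly fsync
log.close()
```

The speedup comes entirely from skipping `fsync` on most commits; the cost is the window of commits that exist only in the OS cache when power is lost.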
Fastest non-transaction IO…
Direct IO Durability
When you turn off transaction logs, all databases have the potential to lose or corrupt data during an abnormal server termination because there is no transaction log to restore the data. With FairCom DB, this risk can be mitigated because most tables remain under transaction control while a few preimage and non-transaction tables use Direct IO for durable writes.
FairCom DB is unique in that it provides different types of tables optimized for various levels of performance: transaction tables, in-memory tables, preimage tables and non-transaction tables.
You can configure preimage and non-transaction tables to use Direct IO. This ensures data is written directly to storage, where it is protected from outages. Direct IO bypasses the operating system's asynchronous model of flushing cached data to storage and gives you direct control of when data is flushed to storage. You can configure data to flush immediately, flush every N seconds, flush every N bytes or flush on demand.
Using FairCom's NoSQL API to flush data on demand is particularly useful. It allows you to persist critical data immediately after it is written, while allowing less-critical data to be cached and written periodically. This makes Direct IO fast and durable without transaction logs.
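Flush-on-demand can be pictured like this (hypothetical Python; the policy names are illustrative, not FairCom's API): ordinary records are buffered and flushed every N bytes, while critical records are forced to storage immediately:

```python
import os

class DirectWriter:
    """Buffers ordinary records; flushes critical records on demand."""

    def __init__(self, path, flush_every_bytes=64 * 1024):
        self.f = open(path, "ab")
        self.flush_every_bytes = flush_every_bytes
        self.unflushed = 0

    def write(self, record: bytes, critical=False):
        self.f.write(record + b"\n")
        self.unflushed += len(record) + 1
        # Critical data is persisted immediately; the rest is flushed
        # only after flush_every_bytes of unflushed data accumulates.
        if critical or self.unflushed >= self.flush_every_bytes:
            self.flush()

    def flush(self):
        self.f.flush()
        os.fsync(self.f.fileno())
        self.unflushed = 0

if os.path.exists("sensor.dat"):
    os.remove("sensor.dat")             # start the demo with a clean file

w = DirectWriter("sensor.dat")
for i in range(1000):
    w.write(b"reading %d" % i)          # buffered, flushed every 64 KB
w.write(b"ALARM overheat", critical=True)  # forced to storage right now
w.f.close()
```

Only the data you mark critical pays the full cost of a synchronous flush; everything else rides along in the next batch.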
Direct IO is the fastest form of IO when one process exclusively writes to a table – even while many concurrent processes read from it.
In-memory, preimage and non-transaction tables are well suited for temporary tables, locally cached data, etc. For example, local in-memory tables can collect high-velocity data for real-time analytics and gaming leaderboards. Dedicated writer threads can rapidly persist the in-memory data into non-transaction tables that use Direct IO. Multiple reader threads can process these tables in parallel for high-speed machine learning. No other database can compete with this level of performance, which has been benchmarked at millions of inserts per second on a single server.
Durable NVMe storage…
FairCom DB can mirror the data of any table (except for in-memory tables) across two storage devices. If one device becomes unavailable, the database automatically uses the other. This increases the availability of the database and increases durability.
You can also mirror FairCom DB's local control files, which include security, transaction logs and transaction start files. Using Table Mirroring for these files is essential to achieve high availability and high performance when local storage devices are not configured as RAID 10 arrays.
You can use Table Mirroring to make local NVMe storage highly available. This is important because NVMe is the fastest form of storage outside of non-volatile RAM, and it is cost effective. But it is not highly available because it cannot participate in RAID arrays.
You can also use mirroring to place copies of files on both local and remote storage for higher availability, but likely slower performance.
Mirroring is as slow as the slowest storage device, so it is important to mirror across similar storage devices.
When you are not using NVMe devices or mirroring across local and remote storage, hardware-accelerated RAID storage is preferable.
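Mirroring can be pictured as writing every record to two devices and reading from whichever copy survives (a conceptual Python sketch, not FairCom's implementation):

```python
import os

class MirroredFile:
    """Writes go to both copies; reads fall back to a surviving copy."""

    def __init__(self, primary, mirror):
        self.paths = [primary, mirror]
        for path in self.paths:
            if os.path.exists(path):
                os.remove(path)          # start the demo with clean files

    def append(self, record: bytes):
        # A write completes only when both devices have it, which is
        # why mirroring is only as fast as the slowest device.
        for path in self.paths:
            with open(path, "ab") as f:
                f.write(record + b"\n")
                f.flush()
                os.fsync(f.fileno())

    def read_all(self):
        for path in self.paths:
            if os.path.exists(path):     # use whichever copy survives
                with open(path, "rb") as f:
                    return f.read().splitlines()
        raise IOError("both copies are unavailable")

m = MirroredFile("data_a.bin", "data_b.bin")
m.append(b"rec1")
m.append(b"rec2")
os.remove("data_a.bin")                  # simulate a failed device
print(m.read_all())                      # → [b'rec1', b'rec2'] from the mirror
```

Because every `append` waits on both copies, the sketch also makes the point above concrete: the mirror runs at the speed of its slowest device.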
By default, FairCom DB provides ACID Durability. Each transaction is committed to a transaction log that is immediately flushed to disk. Data is shortly thereafter written to table files. Periodically, the files are backed up.
The data is durable because it is in multiple places: data files, transaction logs and backups.
Together, backups and transaction logs can restore the database to any point in time.
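That guarantee can be sketched as a minimal write-ahead log (hypothetical Python, not FairCom's log format): every change is flushed to the log before the table file is touched, so replaying the log reproduces any committed transaction even if the table file is lost:

```python
import json
import os

LOG, TABLE = "wal.log", "table.json"
for path in (LOG, TABLE):
    if os.path.exists(path):
        os.remove(path)                  # start the demo from a clean state

def commit(change: dict):
    # 1. Append the change to the log and force it to disk first.
    with open(LOG, "a") as f:
        f.write(json.dumps(change) + "\n")
        f.flush()
        os.fsync(f.fileno())
    # 2. Only then apply it to the table file, which may lag behind.
    table = {}
    if os.path.exists(TABLE):
        with open(TABLE) as f:
            table = json.load(f)
    table.update(change)
    with open(TABLE, "w") as f:
        json.dump(table, f)

def recover():
    # Replay every logged change, in order, into a fresh table.
    table = {}
    with open(LOG) as f:
        for line in f:
            table.update(json.loads(line))
    return table

commit({"acct_1": 100})
commit({"acct_2": 250})
os.remove(TABLE)          # simulate losing the table file in a crash
print(recover())          # → {'acct_1': 100, 'acct_2': 250}
```

Stopping the replay at any earlier log record would yield the database as of that moment, which is the essence of a point-in-time restore.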
FairCom DB provides many options for achieving exceptional speed while retaining durability.
Hardware and OS
FairCom Database Engine runs on most hardware and operating systems.
See what is new!
c-treeACE lives on under a new name: FairCom DB. It is the same great technology you have come to expect from the FairCom c-tree product family, with new improvements. Upgrading to FairCom DB V12 from c-treeACE V11 or earlier versions of c-tree is as easy as ever.
To upgrade from c-treeACE, contact your FairCom account executive or contact us here.