Performance


Advanced Indexing

Challenge: You need robust indexing capabilities, but cannot sacrifice performance.

With the standard indexing functions provided by FairCom DB and other DBMS products, Add, Delete, and Update operations cause key inserts and deletes on every index associated with a data file. These index operations can impose a measurable performance penalty, especially when numerous indexes are involved. For some applications the drop in performance is not significant, and the standard indexing functions suffice.

Mission-critical applications cannot afford to make any sacrifice in performance. To meet the demands of these applications, FairCom DB offers an advanced indexing process we call Deferred Indexing. By delaying selected index operations, applications can very quickly update files that carry large numbers of indexes. A “deferred indexing” attribute, specified when a new index is created, delays key insert/delete operations for that index file (or for multiple index files). NoSQL operations thus avoid the overhead of directly updating these deferred indexes. With deferred indexing enabled, a background thread performs the key insert and delete operations on the deferred index files asynchronously.

In addition, an optional callback function can be registered for the data file; it is called as an alternative to, or in addition to, the key insert/delete operations, providing further control and functionality.
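
Conceptually, deferred indexing decouples record updates from index maintenance. The sketch below is not the FairCom API; it is a minimal illustration of the pattern, assuming a simple in-process queue: record writes enqueue key operations and return immediately, while a background thread drains the queue and applies the keys to the index.

```c
/* Minimal sketch of the deferred-indexing pattern (illustration only, not the
 * FairCom API): record writes enqueue key operations; a background thread
 * applies them to the index asynchronously. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef enum { KEY_INSERT, KEY_DELETE } key_op_type;

typedef struct key_op {
    key_op_type    type;
    long           key;              /* key value extracted from the record */
    struct key_op *next;
} key_op;

static key_op         *queue_head, *queue_tail;
static int             done;
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;

/* Called by the record add/delete path: defer the index work and return. */
static void defer_key_op(key_op_type type, long key)
{
    key_op *op = malloc(sizeof *op);
    op->type = type; op->key = key; op->next = NULL;

    pthread_mutex_lock(&lock);
    if (queue_tail) queue_tail->next = op; else queue_head = op;
    queue_tail = op;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
}

/* Background thread: drains the queue and updates the (stand-in) index. */
static void *index_maintainer(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!queue_head && !done)
            pthread_cond_wait(&ready, &lock);
        if (!queue_head && done) { pthread_mutex_unlock(&lock); break; }
        key_op *op = queue_head;
        queue_head = op->next;
        if (!queue_head) queue_tail = NULL;
        pthread_mutex_unlock(&lock);

        printf("%s key %ld in index\n",
               op->type == KEY_INSERT ? "insert" : "delete", op->key);
        free(op);
    }
    return NULL;
}

int main(void)
{
    pthread_t worker;
    pthread_create(&worker, NULL, index_maintainer, NULL);

    /* The foreground "record updates" return immediately;
     * index maintenance happens asynchronously. */
    for (long k = 1; k <= 5; k++)
        defer_key_op(KEY_INSERT, k);
    defer_key_op(KEY_DELETE, 3);

    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
    pthread_join(worker, NULL);
    return 0;
}
```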

 


Deferred Transaction Logging

Challenge: You need ACID-compliant transactions, but can’t take the performance hit associated with logging transactions to disk after each commit. You need blazing performance AND the ACID guarantee.

For complete OLTP ACID compliance, FairCom DB transaction logs are synced to disk with each commit operation, ensuring absolute data integrity with complete recoverability. However, that integrity comes at a performance cost. Many applications could benefit from a “relaxed” mode of transaction log writes.

To address this need, an advanced transaction mode allows transaction log updates to remain cached in FairCom DB’s in-memory transaction log buffer, as well as in the file system cache, after a transaction has committed. The challenge is to prevent index and data updates from reaching disk before the corresponding log entries. FairCom DB delays transaction log writes to persistent storage while guaranteeing that the log entries for a given transaction reach disk before any data file updates associated with that transaction are written to the file system cache or to persistent storage. You can think of it as ACID with the Durability component deferred. The result is blazing fast performance, even under the most demanding persisted transaction requirements.
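
The ordering rule behind this mode can be stated in a few lines: commit records may sit in a memory buffer, but before any data page is written out, the log must be forced to disk at least up to that page’s log sequence number. The following is only a schematic sketch of that rule, with invented helpers (log_commit, flush_log_to, write_page) and a simple LSN counter; it is not FairCom DB’s implementation.

```c
/* Sketch of the write-ordering rule behind relaxed durability (illustration
 * only): commits merely buffer log records; the log is forced to disk lazily,
 * but always before any data page that depends on it. */
#include <stdio.h>

typedef unsigned long lsn_t;

static lsn_t next_lsn = 1;       /* next log sequence number to hand out */
static lsn_t flushed_lsn = 0;    /* highest LSN known to be on disk      */

/* Append a commit record to the in-memory log buffer; no disk I/O here. */
static lsn_t log_commit(void)
{
    lsn_t lsn = next_lsn++;
    printf("commit buffered at LSN %lu (not yet on disk)\n", lsn);
    return lsn;
}

/* Force the log buffer to disk up to the given LSN (hypothetical helper). */
static void flush_log_to(lsn_t lsn)
{
    if (lsn > flushed_lsn) {
        printf("flushing log through LSN %lu\n", lsn);
        flushed_lsn = lsn;
    }
}

/* Write-ahead rule: a data page carrying changes up to page_lsn may only be
 * written after the log covering those changes is durable. */
static void write_page(int page_no, lsn_t page_lsn)
{
    flush_log_to(page_lsn);
    printf("writing data page %d (page LSN %lu)\n", page_no, page_lsn);
}

int main(void)
{
    lsn_t t1 = log_commit();     /* transaction 1 commits: memory only */
    lsn_t t2 = log_commit();     /* transaction 2 commits: memory only */
    write_page(42, t1);          /* page flush forces the log first    */
    write_page(43, t2);
    return 0;
}
```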

 


With Memory Files

Memory Files

Challenge: You need in-memory computing.

In-memory computing with FairCom’s memory files is as easy as working with any standard, disk-based FairCom DB data and index file: you open and close them and add, delete, and update records. But rather than persisting to disk, these files reside entirely in memory. Memory files are ideal for frequently accessed read-only files, temporary files, in-memory list management, and other cases where guaranteed recoverability is not required. The best part is that performance is stellar, clocking hundreds of thousands of transactions per second or faster.
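
The “same API, different backing store” idea can be seen with nothing more than standard C streams: on POSIX systems, fmemopen() gives a stream that lives entirely in a memory buffer, yet the code that writes and reads it is identical to code operating on a disk file. This is only an analogy for how FairCom memory files relate to disk files; it does not use the c-tree API.

```c
/* Analogy using standard C streams: the same read/write code runs against a
 * disk-backed stream and a memory-backed stream. FairCom memory files follow
 * the same principle with the c-tree record API. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>

static void write_and_read(FILE *fp, const char *label)
{
    const char *records[] = { "alpha", "bravo", "charlie" };
    char line[64];

    for (size_t i = 0; i < 3; i++)            /* "add records"    */
        fprintf(fp, "%s\n", records[i]);

    rewind(fp);                               /* "read them back" */
    while (fgets(line, sizeof line, fp))
        printf("[%s] %s", label, line);
}

int main(void)
{
    char buf[256];

    FILE *disk = tmpfile();                        /* disk-backed stream   */
    FILE *mem  = fmemopen(buf, sizeof buf, "w+");  /* memory-backed stream */
    if (!disk || !mem) return 1;

    write_and_read(disk, "disk");             /* identical code path ...   */
    write_and_read(mem,  "memory");           /* ... regardless of backing */

    fclose(disk);
    fclose(mem);
    return 0;
}
```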


Server-Side Processing

Challenge: Minimize delays associated with client-side processing and moving large amounts of data back and forth from client to server.

Server-side processing is a proven technique for performance-driven applications. Locating intensive processing routines closer to your data source maintains locality of scope and removes the latencies associated with moving data back and forth to clients. Server-side routines also enforce core business rules, further protecting data integrity. And modifying a single server-side routine is often much easier than modifying many deployed applications. FairCom DB provides numerous server-side technologies for development ease.

Stored procedures are SQL-based routines processed directly within the server, and they also allow you to develop rich SQL application APIs. FairCom DB SQL stored procedure support is available for both Java and Microsoft .NET frameworks. With complete access to the full-featured development APIs these frameworks offer, extremely complex server-side logic can be implemented. Because procedures are stored as compiled binaries, they execute much faster than an equivalent SQL script.

Native C callback DLLs (libraries) can be built and deployed alongside FairCom DB. Numerous callback mechanisms are available within the FairCom DB Server: row-level, file open, file close, and filter-logic callbacks can all be implemented for precise control over your data handling.
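
What such a callback looks like in practice is defined by the FairCom server SDK headers, so the fragment below is only a schematic sketch with invented names (record_t, region_filter, and the small driver loop standing in for the server). The point is the shape: a plain C function, compiled into a library the server loads, invoked once per row to keep or skip it.

```c
/* Schematic sketch of a row-level filter callback (names invented for
 * illustration; real signatures come from the FairCom server SDK headers). */
#include <stdio.h>
#include <string.h>

typedef struct {         /* stand-in for the record image the server passes */
    long id;
    char region[8];
} record_t;

/* Filter callback: return nonzero to keep the record, zero to skip it. */
static int region_filter(const record_t *rec, const void *user_data)
{
    const char *wanted = user_data;
    return strcmp(rec->region, wanted) == 0;
}

/* Tiny driver standing in for the server's scan loop. */
int main(void)
{
    record_t rows[] = {
        { 1, "EMEA" }, { 2, "APAC" }, { 3, "EMEA" }, { 4, "AMER" }
    };
    const char *wanted = "EMEA";

    for (size_t i = 0; i < sizeof rows / sizeof rows[0]; i++)
        if (region_filter(&rows[i], wanted))
            printf("row %ld passes the filter\n", rows[i].id);
    return 0;
}
```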

For the most intimate control, custom c‑tree APIs can be created. The FairCom DB Server SDK provides modules for creating your own direct APIs for nearly any functionality you can imagine. Because these APIs sit directly in the heart of FairCom DB’s API stack, with access to all core FairCom DB functionality and no inter-process communication (such as TCP/IP or shared memory), their performance is exceptional.

Granular Cache-Level Support

Caching

Challenge: You need in-memory speed with durability and recoverability.

Minimizing the system I/O is FairCom DB’s most effective method of assuring superior performance. The more your design can eliminate the need for I/O, the better your efficiency. FairCom DB uses sophisticated hashed caching algorithms for both data and index files to minimize the amount of data actually transferred to and from the external storage medium. Its hashing algorithms and unique index mechanism within the cache management subsystem allow FairCom DB to use large amounts of memory to boost data throughput.

Additionally, the granular cache-level support in FairCom DB lets you keep your entire database in cache, providing the performance of a pure in-memory database while concurrently persisting your data to disk. This option delivers close to in-memory performance with the data integrity of on-disk storage. If you don’t need to persist your data to disk, use FairCom DB memory file support, which is a pure in-memory database implementation.
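
The idea behind a hashed cache is straightforward: pages are looked up by hashing their (file, page) address into a bucket table, so a cache hit costs a short chain walk instead of a disk read. The sketch below illustrates that lookup in miniature; it is not FairCom DB’s cache implementation.

```c
/* Minimal sketch of a hashed page cache lookup (illustration only): pages are
 * found by hashing their (file, page) address, so a hit avoids any disk I/O. */
#include <stdio.h>
#include <stdlib.h>

#define BUCKETS 1024

typedef struct page {
    int          file_id;
    long         page_no;
    struct page *next;        /* collision chain within a bucket */
    char         data[8192];  /* cached page image               */
} page_t;

static page_t *buckets[BUCKETS];

static unsigned hash(int file_id, long page_no)
{
    return ((unsigned)file_id * 2654435761u ^ (unsigned)page_no) % BUCKETS;
}

/* Return the cached page, loading (and caching) it on a miss. */
static page_t *get_page(int file_id, long page_no)
{
    unsigned h = hash(file_id, page_no);
    for (page_t *p = buckets[h]; p; p = p->next)
        if (p->file_id == file_id && p->page_no == page_no)
            return p;                        /* cache hit: no I/O          */

    page_t *p = calloc(1, sizeof *p);        /* cache miss: read from disk */
    p->file_id = file_id;
    p->page_no = page_no;
    /* ... disk read into p->data would happen here ... */
    p->next = buckets[h];
    buckets[h] = p;
    return p;
}

int main(void)
{
    get_page(1, 42);                         /* miss: loaded and cached */
    get_page(1, 42);                         /* hit: served from memory */
    printf("page (1,42) cached at %p\n", (void *)get_page(1, 42));
    return 0;
}
```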

Work in Bulk

Batch Operations

Challenge: You need to prevent communication latency from slowing performance.

FairCom DB’s batch processing support greatly improves performance by operating on groups of related records at once. Batches are used for bulk record handling between client and server, removing a large degree of communication latency. Batch record retrievals, adds, updates, and deletes are all supported. Retrieval can be in indexed or physical order, and index ranges can be employed for fine control over the data retrieved.
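
The benefit comes from amortizing the network round trip: instead of one client/server exchange per record, many records travel in a single request. The toy model below makes that accounting concrete; send_one() and send_batch() are invented helpers with assumed costs, not the FairCom batch API.

```c
/* Toy cost model for why batching matters (invented helpers and assumed
 * costs, not the FairCom batch API): N single-record calls pay N round
 * trips, while a batch pays the round trip once. */
#include <stdio.h>

#define ROUND_TRIP_US 500   /* assumed network round-trip cost, microseconds */
#define PER_RECORD_US 5     /* assumed server-side cost per record           */

static long send_one(void)    { return ROUND_TRIP_US + PER_RECORD_US; }
static long send_batch(int n) { return ROUND_TRIP_US + (long)n * PER_RECORD_US; }

int main(void)
{
    int  n = 10000;
    long one_by_one = 0;

    for (int i = 0; i < n; i++)          /* one request per record          */
        one_by_one += send_one();

    long batched = send_batch(n);        /* all records in a single request */

    printf("%d records one-by-one: %ld us\n", n, one_by_one);
    printf("%d records batched:    %ld us\n", n, batched);
    return 0;
}
```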

Granularity in Data Access Priority

Efficient, Multithreaded Core Engine

Challenge: You need the performance to handle multiple users pounding the database at the same time.

FairCom DB is highly multithreaded, and the performance and response characteristics of its architecture are unmatched by traditional relational databases. Total throughput actually increases as the first additional users are added, because FairCom DB puts to work time that would otherwise be spent waiting on locks, I/O operations, and the like. Its use of low-level threads is optimized for each supported platform to deliver exceptional performance.

FairCom DB Also Supports:

Temporary indexes

Boost performance by making a single function call to create a temporary index. FairCom DB will automatically purge the index once the application closes.

Fixed and variable-length records

Full support for both fixed- and variable-length records, including automatic deleted-record collation and efficient reuse of space.

Data and index compression

Several methods of data file compression are provided, including the ability to add your own compression algorithms. Indexes can be compressed using padding and leading-character compression.
