Abstract

High performance computing in financial and other industries is constantly challenged by growing demand for faster results and larger volumes of data to process. FPGAs (Field-Programmable Gate Arrays), specially designed hardware units, can execute complex tasks in parallel and thereby substantially increase the throughput of systems and reduce data centre load. For many popular tasks, a couple of machines with FPGAs on board can replace a cluster of 50+ nodes.

Introduction

Modern financial and investment organizations require extensive computations: calculating the risks of portfolios and positions, the yields of fixed income products, the prices of securities and their derivatives, and so on. While the speed of modern computers is growing rapidly, so is the volume of required work, and only those businesses that are able to build fast systems can maintain their competitive edge.

Current solution

A popular approach to HPC (High Performance Computing) in today's financial world is referred to as a "grid". A grid is a cluster of anywhere between 8 and 100+ standard general-purpose machines, connected into a single network, each doing a single piece of the calculation. This solution has a number of advantages: simplicity of the model (sketched at the end of this section), use of common and inexpensive hardware, and development with popular high-level tools.

However, in most organizations there is a natural limit on how large these clusters can grow; the constraints are usually electricity consumption, the availability of space in data centres and cooling capacity. There is also a theoretical limit on grid size, because the communication lines between CPUs have finite throughput: in some cases the overall performance of the cluster grows only logarithmically as new machines are added, and in some cases it does not grow at all. Proper maintenance of such systems is not cheap either; keeping a network of 100+ machines healthy 24/7 can easily cost hundreds of thousands of US dollars per year.

These limitations are so serious that some problems simply cannot be addressed because of the costs involved. If one wanted to increase the speed (or volume) of calculations by an order of magnitude, the cluster approach would not be feasible: building a 500-1,000-node grid is extremely expensive and complicated, and will not necessarily deliver the desired performance.
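
To make the grid model concrete, here is a minimal sketch of the fan-out/gather pattern a grid implements. This is an illustration only: std::async stands in for remote worker nodes, and price_instrument is a purely hypothetical stand-in for a real task.

    // Sketch only: a grid in miniature. A real grid ships each task to a
    // separate machine over the network; std::async stands in for that here.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <future>
    #include <vector>

    // Hypothetical "piece of calculation", e.g. pricing one instrument.
    double price_instrument(double spot) {
        return std::exp(-0.05) * std::max(spot - 100.0, 0.0);  // toy payoff
    }

    int main() {
        std::vector<double> spots = {95.0, 100.0, 105.0, 110.0};
        std::vector<std::future<double>> jobs;
        for (double s : spots)   // fan the independent tasks out
            jobs.push_back(std::async(std::launch::async, price_instrument, s));
        for (auto& j : jobs)     // gather the results
            std::printf("%f\n", j.get());
    }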

FPGAs can help solve HPC problems

This is where dedicated hardware can help: by putting common computation-intensive parts of algorithms on a specially designed FPGA (Field-Programmable Gate Array), we can increase performance by a factor of 20-500!

FPGA versus general-purpose CPU

An FPGA is essentially a chip, usually containing millions of programmable low-level components and interconnections. The components include basic logical operations (Boolean AND, OR, NAND, XOR, etc.), which can be connected to each other to form any algorithm; an FPGA can therefore be seen as a hardware analogue of a program. The important feature of an FPGA is that it can be programmed to execute a relatively complex computation very fast, thanks to its ability to perform calculations in parallel. The same algorithm can be executed simultaneously for 10 or more independent inputs, or across many input values, in only a few clock cycles. This is the feature that makes the performance improvements mentioned above achievable.
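
As a rough software model of that parallelism (real FPGA designs are written in a hardware description language such as VHDL or Verilog, not C++), imagine the chip laid out with 10 identical copies of the same pipeline, so that 10 inputs are processed in every clock cycle. The lane count and the toy function below are illustrative assumptions.

    // Illustrative model only, not real hardware description code. It mimics
    // N hardware "lanes" evaluating the same algorithm on N inputs per
    // simulated clock cycle.
    #include <array>
    #include <cstdio>

    constexpr int LANES = 10;  // assumed number of parallel pipeline copies

    // The algorithm replicated in each lane (hypothetical example).
    int lane_compute(int x) { return (x * x + 3 * x) & 0xFF; }

    int main() {
        std::array<int, LANES> inputs{}, outputs{};
        for (int i = 0; i < LANES; ++i) inputs[i] = i;

        // One "clock cycle": all lanes produce a result at the same time.
        // On a CPU this loop runs sequentially; in hardware each iteration
        // is a physically separate circuit operating concurrently.
        for (int i = 0; i < LANES; ++i)
            outputs[i] = lane_compute(inputs[i]);

        for (int v : outputs) std::printf("%d ", v);
        std::printf("\n");
    }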

An FPGA is populated with a number of algorithms and is mounted either on a standard PCI board (installed directly into the host PC) or in a standalone appliance. The CPU can ask it to execute the supported tasks in a very simple fashion, not dissimilar to calling an external software routine.
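
From the host's point of view, such a call might look like the sketch below. The driver API shown (fpga_open, fpga_run_task, fpga_close) is entirely hypothetical, not a real vendor library; the point is only the shape of the interaction: send inputs, run the card's fixed-function logic, read the results.

    // Hypothetical host-side driver API; all names here are illustrative
    // assumptions, not a real vendor library.
    #include <cstdio>
    #include <vector>

    struct FpgaDevice {};  // opaque handle in a real API; empty stub here

    FpgaDevice* fpga_open(int card_index);          // attach to the card
    int fpga_run_task(FpgaDevice* dev, int task_id,
                      const double* in, double* out, int n);  // blocking call
    void fpga_close(FpgaDevice* dev);

    int main() {
        FpgaDevice* dev = fpga_open(0);
        std::vector<double> inputs = {1.0, 2.0, 3.0};
        std::vector<double> outputs(inputs.size());

        // To the CPU this is just a routine call: send inputs, let the
        // card's fixed-function logic run, collect the results.
        const int PRICE_TASK = 7;                   // hypothetical task id
        fpga_run_task(dev, PRICE_TASK, inputs.data(), outputs.data(),
                      (int)inputs.size());

        for (double v : outputs) std::printf("%f\n", v);
        fpga_close(dev);
    }

    // Stubs so the sketch compiles and runs without real hardware.
    FpgaDevice* fpga_open(int) { static FpgaDevice d; return &d; }
    int fpga_run_task(FpgaDevice*, int, const double* in, double* out, int n) {
        for (int i = 0; i < n; ++i) out[i] = in[i] * 2.0;  // pretend result
        return 0;
    }
    void fpga_close(FpgaDevice*) {}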

The challenge presented by FPGAs is that each one needs to be designed for a particular task, while a general-purpose CPU can execute any task. However, for demanding algorithms the effort is well worth it: a single properly designed FPGA can replace from 10 to 200 ordinary CPUs.

It is important to understand that once an FPGA is "programmed", updating its design is far from trivial: the development process itself is costly, and the physical operation of bringing the new code to the chip is complicated. FPGAs are therefore best suited to algorithms that can be developed once and then used without modification for prolonged periods of time.

Why have FPGAs become an option only recently?

Until recently, FPGAs were too small in terms of capacity and too expensive in terms of hardware and development to be a useful option for HPC. Lately, however, FPGAs have been improving in capability much faster than general-purpose CPUs, and this trend is expected to continue. As a result, FPGAs are becoming widely adopted in mainstream embedded computing for building complex mission-critical systems, and their attractiveness to the high-performance computing sector has at the same time been growing steadily.

Recent publications on the Internet, such as this article, the following blog entry, a research grant proposal and many others, talk about FPGAs being used in investment banking, oil and gas, and other industries where HPC is important.

Conclusion and how we can help

For tasks that require very high performance and whose key algorithms won't change over time, an FPGA can be a feasible solution. We can help you analyse your requirements and develop the required design. Our engineers are best-of-breed hardware and software developers with a solid understanding of the requirements of the financial and investment world. Don't hesitate to contact us if you are interested in bringing the power of FPGAs into your projects!

Please check this document for more technical details about FPGAs in the financial world.

Get in touch
Follow updates

Join our social networks and RSS feed to keep up to date with the latest news and publications.