More Details About TCP

So first off, I have merged my code to master, see it here:

And while I work on updating the file, here are 2 slides for you to look at:

So, if you have the appropriate NI hardware, you can clone down this repository and run 1 TCP and 1 UDP session to your FPGA!

Do you have an existing FPGA solution? If you follow the Vivado project that is exported from LabVIEW FPGA, you can probably import your existing FPGA solution to National Instruments hardware, plug in the lwIP TCP/IP code that I have running on a MicroBlaze core, and end up with a very rapidly customizable FPGA solution, now with TCP and UDP support.

If you are already using a TCP/IP core with your FPGA, why not give this a try? Importing cores is not that hard with LabVIEW.

TCP is Now Working as Well!

As I was traversing my mental decision tree depth-first, digging deeper and deeper into the lwIP TCP/IP source code, I thought to myself that I should go back up the tree and take a closer look at the tcpdump output.

It turns out I was setting the source and destination IP addresses to the same value. Once I set the source IP address to NULL, the TCP/IP stack automatically bound it to the IP address of the MAC and all communications started working.

Now I will do some code cleanup, push to GitHub, and then get started on a simple demo of how this works.

After that… I will put this logic inside the FPGA and show how this can help you with your trading applications.

Code is a work-in-progress and is on the addTcp branch

Milestone Reached! UDP end to end

What I have working right now:

  1. A UDP packet enters the FPGA via the 10 Gigabit PHY that is connected to the FPGA.
  2. The 10 Gigabit Ethernet MAC running on the FPGA consumes the Ethernet frame and passes it into a C++ application running inside the MicroBlaze processor.
  3. The MicroBlaze processor runs a version of the open-source TCP/IP stack “lwIP”, which processes the UDP datagram, extracts the pertinent information, and passes it to a specific callback function that is implemented by the user. In this case, that would be me.
  4. My implementation of this callback function sends a session identifier along with the payload out to the FPGA via an AXI FIFO.
  5. The FPGA implementation then takes this packet and forwards it to the host by using a LabVIEW Target-to-Host DMA FIFO.

Okay. So the end goal is to not send the UDP payload up to the host, but to send the UDP payload to another loop inside the FPGA. This loop will do some sort of analysis or “trading” with it. Additionally, instead of using UDP, I am currently working on the TCP version; I am just trying to figure out the lwIP “raw” TCP interface.

All code has been merged to the master branch, see github:

Here is a Crude Diagram

Here are a few pictures of my actual setup:

Linux machine that is connected to my regular network and directly to the FPGA board via a separate 10 Gigabit Mellanox ConnectX-2 NIC. The blue cable is connected to my regular network.
National Instruments PXI chassis with integrated controller running Windows 7, which controls the PXIe-6592 board; one 10 Gigabit port is wired directly to the Linux machine pictured above. The blue cable is connected to my regular network here as well.
Entire setup. The 10 Gigabit card in the Linux server on the left is connected directly to the first port of the PXIe-6592 board inside the PXIe chassis on the right.

What’s Next?

  • Implement TCP server
  • Create a nice PowerPoint describing the architecture
  • Consume UDP and TCP data on the FPGA, and do something that is ‘low latency’

Install Xilinx Vivado Tools on Fedora 27

The National Instruments “LabVIEW 2018 FPGA Module Xilinx Compilation Tool for Vivado 2017.2 – Linux” is only officially supported on Red Hat Enterprise Linux and CentOS. CentOS is basically a clone of Red Hat Enterprise Linux, also known as RHEL. I like to joke and call it R-HELL.

The reason is valid: Red Hat Enterprise Linux has a longer release cycle, which means that a new package won't just show up and break a lot of your code or, more importantly, introduce security flaws into your system just because you ran a system update.

But what about for non-Enterprise users who are not using these tools in a corporate environment and what about those working at small startups?

Well, I tried installing this package on my Fedora 27 machine, and after some googling I found that all you have to do is get the following package installed for it to work:

sudo dnf copr enable mjg/libtiff3
sudo dnf install libtiff3


So it appears to me that Monero/CryptoNote mining has gained a lot of popularity lately, and this has led many people to this website.  Let me note that LabVIEW FPGA is a proprietary tool that comes with a 30-day Evaluation.  After that, you have to spin up a new virtual machine and reinstall LabVIEW to keep your installation alive.  However, when it comes to FPGA development, LabVIEW is the best tool out there.

Sure, it may not support the latest board from Xilinx or Digilent or whoever, but what it does support it supports well.

Take the PXIe-6592 board.  I was able to take this board and to implement stage 3 of the CryptoNight algorithm while I was on a “diversion” from my normal FPGA development side projects.

Now I am hearing that Monero is profitable, but only via an FPGA.  Well, let me go back to my source code and see if I can provide some benchmarks.

If You Buy This Board, You Can Run This

If you purchase the National Instruments PXIe-6592R Board, retailing at $12,197.00 USD, I guarantee that you can run an FPGA accelerated 10 Gigabit network card in as much time as it takes for you to synthesize your code!  Call Now, the number is 1-900-XXX-YYYY.

Batteries not included, strings attached.  But seriously, I have just cleaned up the code and was able to run this from its new home, namely a brand new directory inside my kitchen sink of LabVIEW samples.

Download the source code and take a look at it here:

You can take a look at the MicroBlaze design, you can look at the MicroBlaze C++ code, or you can look at the LabVIEW code.  I removed all of the other vi’s that are not needed for this specific example, so browsing this code should be easy.

Please note however, that this project is what I will slowly morph to have a TCP/IP stack implemented inside the FPGA MicroBlaze and not a pseudo TCP/IP stack in software.

Additionally, only Windows drivers are available for use for this specific board developed (again) by National Instruments.  I have successfully (and legally) engineered my own device drivers for other boards in the past from NI.  You too can write your own drivers to port this to Linux or IBM or whatever hardware platform of your choice.  However, in order to do this you would have to spend a certain amount of $ on hardware, or be really good at convincing people to give you things…

Finally, to run this code, you would need the following installed on your system:

And that’s it! Oh, and you would also need a PXIe chassis to house this board, but if you order one of these boards your sales representative will recommend one for you.  I went real cheap and got a used PXIe-1062Q for around $750.  The Q stands for “quiet”, and believe me, it is not quiet at all, so imagine how loud the normal version is!  (Remember, this hardware is usually military grade and capable of running on things like airplanes, satellites, and Humvees in the desert, so the noise it makes is most definitely acceptable, but don’t expect to meditate with this thing on.)


NI Week is coming up fast, which means a new version of LabVIEW and LabVIEW FPGA will be available for download sometime in May of 2018.  This means we may get a nice upgrade to be able to use this board with a later version of the Xilinx Vivado tools.  So stay tuned and check back for more information!

Coding Standards Matter…

I have wired up the components of my 10 Gigabit FPGA Accelerated Network card with great care, and I decided to have my “tester” application skip the lwIP stack and to pass the received packet directly to the host for testing/verification purposes.

Everything was checking out: the LabVIEW code looked flawless, and the interface to the 10 Gigabit transceiver was perfect.  But for some reason I was not receiving the packets on the host.

I analyzed the code, inserted probes and whatnot.  And finally, while reading through the actual C++ code (MicroBlaze C++, that is), I found the bug.

A very simple bug hidden in plain sight!

// Now echo the data back out
if (XLlFifo_iTxVacancy(fifo_1)) {
    XGpio_DiscreteWrite(&gpio_2, 2, 0xF001);
    for (i = 0; i < recv_len_bytes; i++) {
        XGpio_DiscreteWrite(&gpio_2, 2, buffer[i]);
        XLlFifo_Write(fifo_1, buffer, recv_len_bytes);
    }
    XLlFifo_iTxSetLen(fifo_1, recv_len_bytes);
}
Do you see the error?  Well, neither did I, until I read the documentation for XLlFifo_Write again, for the umpteenth time… I was writing (length of packet) squared bytes of data! Why? Because a single call to XLlFifo_Write already writes the entire packet, and I was calling it once per byte inside the loop.

Anyway, I am now re-synthesizing my code and we will see what happens when I run it in around 2 hours’ time.

Also, I added the TKEEP signal to my AXI Stream FIFO, and it worked exactly as expected, meaning that:

  • If I send 12 bytes from the LabVIEW FPGA FIFO in to the MicroBlaze, it detects 12 bytes
  • If I send 13 bytes, with the TKEEP signal being 0b0001 for the last word only, and 0xF for the rest, I get 13 bytes in the MicroBlaze code.
  • If I send 14 bytes… and so on and so forth, the MicroBlaze recognizes only that many bytes.

However, everything was aligned to 32 bit words.

Maybe I will work on cleaning up and pushing some of my code to github while I wait…

10 Gigabit FPGA-based Network Card

So here is the simplest FPGA-based network interface card that I know of.

This application will start Port 0 of the 10 Gigabit network interface that is provided by the PXIe-6592R board by National Instruments, and will allow you to do any of the following:

  • Check if any new Ethernet frames have been received, and display the information, including the raw bytes of any such received frame
  • Send a raw Ethernet frame out of Port 0

I have included the necessary code to parse and generate the following types of packets, enabling you to communicate with another computer on your network that supports:

  • Ethernet II
  • ARP
  • ICMP
  • IPv4
  • UDP

The VIs to do this are located in the directory “Tests/MAC/Protocols”: simply wire the incoming frame data into the “Parse” VIs, or write the parameters into the “Create” VIs.

How to Parse Incoming Ethernet Frames

For an example of how to parse an incoming frame, see the “Poll RX” case inside the bottom While Loop of the “MAC-Tester” VI:

How to Create Ethernet Frames

For an example of how to create a valid outgoing Ethernet frame with a valid CRC32 at the end, see the “Transmit Packet” case inside the bottom While Loop of the “MAC-Tester” VI:

This VI calls the “” and wires the size – in bytes – and the frame data, in 64-bit words, to the transmit FIFO.

Full Source Code

See the source code on GitHub here:

See the for more documentation.


Now I have to take this code and wire it up to my MicroBlaze implementation that also sits inside the FPGA project.  The only problem right now is that I have only figured out how to configure a 32-bit FIFO, not a 64-bit one.  So I can either do some sort of translation inside the FPGA or hope to get lucky by configuring the FIFO to be 64 bits wide.  Note: by FIFO, I am referring to an AXI-Stream FIFO.

10 Gigabit FPGA-based Network Code Coming Soon

I am getting real close to finishing my proof-of-concept FPGA-based network card that is based on the PXIe-6592 National Instruments board, which uses the Kintex-7 410T FPGA chip by Xilinx and has 2GB of DDR3 RAM.

Using the Arty Artix-7 board, I was able to make sure that the MicroBlaze code running the lwIP TCP/IP stack works fine, and I was able to use an NI example to make the 10 Gigabit Ethernet MAC part.  The only issue is that the NI code is quite complex and uses features and ideas that I have never seen before.

Nevertheless, I am iterating over some modifications to the example to allow for a LabVIEW Host network stack that uses the FPGA only for the sending and receiving of ethernet frames.  Once I get that working, I will just switch the connection from LabVIEW Host to the on-board MicroBlaze.

How to Multiply 64 bit Numbers in LabVIEW

What is the product of 0x9D0BF6FDAC70AB52 and 0x6408F6540A1384CB?  Well, according to LabVIEW for Windows, the answer is 0x2D90DE07C0C42206.  According to C++ on OSX (without any optimizations or usage of Intel intrinsic functions), the answer is also 0x2D90DE07C0C42206.

The real answer is…  0x3D5E2BF7DCBCA6622D90DE07C0C42206.

How do you get this number? You have to use compiler intrinsics, or calculate this value yourself.  LabVIEW does not make it easy to call an Intel compiler intrinsic, so I implemented it myself.  Here is a screenshot of the implementation in LabVIEW for Windows:

To download and use this code in your project, see:

Note: FPGA version is coming soon, but I am busy working on something else right now