# What is Oscilloscope Memory Depth?

This article will attempt to explain oscilloscope memory depth both efficiently and effectively. I’ve recently read a few articles about oscilloscope memory depth on forums, in blog posts, and in what are supposed to be informational articles, but I haven’t really seen anyone describe what it actually is. It’s both simpler and more complicated than what most of those articles present.

Let’s start with a simple explanation of the numbers. I’ll use the SDS1202X-E and the DSO5202B as reference points, mainly because I think they’re perfect contrasting examples at similar price points.

The relevant specs on the SDS1202X-E are:

• Real-time sampling rate: 1 GSa/s
• Memory depth: 14 Mpts

The relevant specs on the DSO5202B are:

• Real-time sampling rate: 1 GSa/s
• Memory depth: 1 Mpts

Now, I’ll need to describe a summarized version of what all oscilloscopes do. Each channel has an IC that reads an analog (real voltage) value and converts that value to a digital one. Say, for instance, you have a 4-bit ADC; that means 2^4 = 16 possible values, or 0–15. Using that ADC to measure 2.5 volts on a 0–5 volt scale, it would report a value of 7 or 8. Similarly, if the ADC were 8 bits, with a range of 0–255, the value would be 127 or 128. The same logic applies to all ADCs.
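The quantization above is easy to sketch in code. This is a minimal model of an ideal ADC; the function name and signature are my own, not from any real scope’s firmware:

```python
def adc_code(voltage, v_min, v_max, bits):
    """Map an analog voltage to the nearest code of an idealized ADC."""
    levels = 2 ** bits                          # e.g. 4 bits -> 16 codes (0-15)
    frac = (voltage - v_min) / (v_max - v_min)  # position within the input range
    return max(0, min(levels - 1, round(frac * (levels - 1))))

print(adc_code(2.5, 0.0, 5.0, 4))   # 4-bit ADC: 2.5 V lands at 7.5, so code 7 or 8
print(adc_code(2.5, 0.0, 5.0, 8))   # 8-bit ADC: lands at 127.5, so code 127 or 128
```

Note that 2.5 V falls exactly between two codes in both cases, which is why the article says “7 or 8” and “127 or 128”; a real ADC’s noise and rounding behavior decides which one you get.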

In an oscilloscope, the ADC typically has a lower bit depth but a very high sampling rate. The sampling rate is the number of measurements taken per second. For instance, at a sampling rate of 250 MSa/s, the ADC measures a value every 4 ns and passes the digital value on to another IC. Similarly, at 1 GSa/s, the ADC measures the analog value every 1 ns. If you take a very large number of samples, you can build a reasonable reconstruction of the real-time voltage you’re probing. Without digressing into a much more complicated topic, that’s why a higher sampling rate lets you accurately measure a faster rise time, and consequently a higher bandwidth.
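The rate-to-interval conversion is just a reciprocal, but it’s worth making concrete (the helper name here is my own):

```python
def sample_interval_ns(sample_rate_hz):
    """Time between successive ADC samples, in nanoseconds."""
    return 1e9 / sample_rate_hz

print(sample_interval_ns(250e6))  # 250 MSa/s -> one sample every 4.0 ns
print(sample_interval_ns(1e9))    # 1 GSa/s  -> one sample every 1.0 ns
```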

The next stop is the FPGA (field-programmable gate array). This is a type of IC that uses configurable logic to become whatever hardware you need. These unique ICs are used because of the extremely high data-processing rate dictated by the stream of values coming from the ADC. The goal of these ICs is to do something useful with the data without losing large amounts of it to the timing of each sample. An added bonus is not having to design the IC in silicon, which leads us to the silicon-designed alternative to an FPGA: the ASIC (application-specific integrated circuit). Both can have identical functionality, but there are some simple differences: one, an ASIC is fixed silicon, meaning it’ll never change and therefore has zero potential for bug fixes; two, at scale an ASIC is significantly cheaper.

This is also the reason the CPUs inside computers aren’t FPGAs. It would be pretty awesome if they were, and I have no doubt they will be one day, but as of right now a $150 CPU would probably cost thousands, if not tens of thousands, of dollars to match with an FPGA. In an oscilloscope’s case, however, since not everyone buys the same ASIC from one supplier (think Intel), each company needs to build its own hardware. Since the quantity produced by each manufacturer is very limited, the natural course is to use an FPGA. The big dogs of the oscilloscope industry (think Tektronix) use an ASIC, which results in two things: higher cost, and hopefully no bugs. Since they only get one shot, they’ll spend far more time perfecting the design, but bugs are inevitable.

Back to the logic flow: the FPGA receives the digital values from the ADC, stores them in fast memory, decodes the bit values, then performs the necessary functions. When finished, it re-encodes the data to pass on to the computer through the bus. In nearly every oscilloscope, this computer consists of a RISC CPU (think ARM), some DRAM, and flash memory. Now we’re at the pertinent part: fast memory. Typically the FPGA is linked with some type of SRAM (static RAM), which is the fastest and most expensive type of RAM per byte due to its design. In a computer, this type of RAM is typically used in small capacities as the CPU’s cache, while the slower type, DRAM (dynamic RAM), makes up the larger portion of memory. When people say RAM or memory when talking about computers, it’s usually the DRAM they’re referring to, as there are many different types of memory and RAM in a computer. Because we need access times fast enough to keep up with these very high sample rates, we have to use SRAM, and this explains the price hikes for more memory in an oscilloscope. If we used the onboard computer and DRAM to store the values, it would be so slow that the effective bandwidth would be in the few MHz. You’d never reach 50+ MHz, let alone 200 MHz, 1 GHz, and beyond. These types of oscilloscopes do exist, though, and are much, much cheaper.

So to recap: the 8-bit ADC measures the voltage and sends that information to the FPGA, and the FPGA stores those values in SRAM for processing. In many cases, to cut costs, manufacturers will implement simple SRAM inside the FPGA itself if it has unused logic resources. This is why you’ll see some oscilloscopes with only kilopoints’ worth of memory. Now that we have some foundational knowledge, we can figure out just what the sampling rate, ADC bit depth, and memory depth all translate to.

If the sampling rate is 1 GSa/s with an 8-bit ADC and you have 1 Mbit of fast memory, you can store 0.000125 seconds, or 125 µs: (“memory” / “bit value”) / “sampling rate”. Similarly, with 2 Mbits of fast memory, that value becomes 0.00025 seconds, or 250 µs. This all assumes the SRAM can be written at least as fast as the samples arrive. Obviously, an oscilloscope can do much more than that and offer different time intervals, so how does it do it? This is where the FPGA’s processing comes in, and it reveals how much effective memory you need for each scenario.
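The (“memory” / “bit value”) / “sampling rate” formula is easy to check numerically; a quick sketch (function name is my own):

```python
def capture_time_s(memory_bits, adc_bits, sample_rate_hz):
    """('memory' / 'bit value') / 'sampling rate' -> capture window in seconds."""
    samples = memory_bits / adc_bits     # how many samples fit in fast memory
    return samples / sample_rate_hz      # how much time those samples span

print(capture_time_s(1e6, 8, 1e9))  # 1 Mbit, 8-bit ADC, 1 GSa/s -> 0.000125 s (125 µs)
print(capture_time_s(2e6, 8, 1e9))  # 2 Mbits -> 0.00025 s (250 µs)
```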

The first component to examine is the seconds-per-division setting (s/div) on an oscilloscope. Using the picture above, each vertical line (think longitude) is a division. On the current screen, the horizontal division is 1 ms. Starting from the center line, we have 7 divisions before and 7 divisions after the trigger that we’re analyzing. Therefore, at a division setting of 1 ms, that translates to 7 ms before and 7 ms after, for a total of 14 ms of analysis. This is where we use the formula: ADC sampling rate * time per division * total divisions = memory required. In our case above, that’s 1 GSa/s * 0.001 s per division * 14 total divisions = 14 Mpts. As you can see from the formula, the memory required increases significantly as the time per division increases, since the other two values are hard-set at manufacturing time. We can even work backwards to figure out that the machine has approximately 112 Mbits of fast memory (14 Mpts * 8 bits). This could be implemented in the FPGA, in a separate IC, or even more likely a combination of unused FPGA logic plus an additional IC.
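The memory-required formula, worked through for the SDS1202X-E numbers above (function name is my own):

```python
def memory_required_pts(sample_rate_hz, time_per_div_s, total_divs):
    """sampling rate * time per division * total divisions = memory required."""
    return sample_rate_hz * time_per_div_s * total_divs

pts = memory_required_pts(1e9, 1e-3, 14)  # 1 GSa/s, 1 ms/div, 14 divisions
print(pts)                                # 14,000,000 pts = 14 Mpts
print(pts * 8 / 1e6)                      # stored as 8-bit samples: 112 Mbits
```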

If we compare that to the Hantek unit, let’s solve for the maximum time per division: 1 Mpts / (1 GSa/s * 16 total divisions) = 1 s / 16,000 => 62.5 µs per division. That’s a pretty huge difference; even if we compensate for the display difference and set the total divisions to 14, we still get 71.4 µs. At 16 divisions, that means we can analyze a rise time that is less than 62.5 µs within a single division, but due to bandwidth it must also be greater than 5 ns. The only way to get longer divisions, then, is by lowering the sampling rate. So if you set the time per division to 1 ms on the Hantek, you’d get 62.5 MSa/s. Likewise, if you want 2 ms per division, the sampling rate must drop to 31.25 MSa/s, and so on. If your application is within this range, it’ll work for you. Otherwise, if you need a longer window of time at the same sampling rate, you’ll need more oscilloscope memory depth. Consequently, if you need to analyze a faster rise time, you’ll need more samples per second, which drastically increases the necessary oscilloscope memory depth.
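The Hantek trade-off above is the same formula rearranged two ways; a minimal sketch with my own helper names:

```python
def max_time_per_div(memory_pts, sample_rate_hz, total_divs):
    """Longest time/div the memory supports at full sampling rate."""
    return memory_pts / (sample_rate_hz * total_divs)

def forced_sample_rate(memory_pts, time_per_div_s, total_divs):
    """Sampling rate the scope must drop to for a longer timebase."""
    return memory_pts / (time_per_div_s * total_divs)

print(max_time_per_div(1e6, 1e9, 16))     # 1 Mpts at 1 GSa/s, 16 divs -> 62.5 µs/div
print(forced_sample_rate(1e6, 1e-3, 16))  # 1 ms/div -> 62.5 MSa/s
print(forced_sample_rate(1e6, 2e-3, 16))  # 2 ms/div -> 31.25 MSa/s
```

Swapping in the SDS1202X-E’s 14 Mpts shows why its usable timebase at full sampling rate is so much longer.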

I hope this article has been helpful in breaking down and explaining why oscilloscope memory depth matters when measuring at longer time intervals or higher sampling rates, with the added benefit of covering a little computer theory and a little electronics theory while we’re at it.