Catch-up
Revision as of 22:51, 22 October 2010
The NES CPU, PPU, APU, and mapper run in parallel (that is, at the same time), and NES games are designed with this in mind. Many of them modify the PPU registers and CHR bank numbers multiple times to produce scroll splits, curved roads, and other raster effects. But most emulators are programmed for a Von Neumann architecture that does only one thing at a time. So in some sense, an emulator must switch among emulating these components one at a time. But this switching must be fairly fine-grained: an emulator can't just run the CPU for a whole frame and then run the PPU for a whole frame; otherwise, the raster splits won't be visible.
Nintendulator's design philosophy favors clarity and accuracy over speed: it emulates each component for one CPU cycle before switching to the next. But efficient emulators do some level of catch-up, running the emulated CPU for several dozen cycles and then running the PPU and APU until they are synchronized. Keeping one component in the host CPU for a longer time speeds things up because the relevant data stays in the host CPU's fast registers and cache rather than in (slower) main memory. This is valid as long as the end result is as if the emulator had run all components cycle by cycle.
The basic technique looks like this:
- Find the next time that one component could affect another, such as the CPU writing to a PPU register or the PPU asserting an interrupt to the CPU.
- Run the CPU up to that time.
- Run the other component up to that time.
At the end of each frame (e.g. the start of scanline 0 or scanline 240), the emulator catches up everything and hands off the completed video surface and audio stream to the operating system.
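The steps above can be sketched as follows. This is a minimal illustration, not real emulator code: the component steppers are stand-ins, and the 3:1 CPU-to-PPU clock ratio is the NTSC figure.

```c
#include <stdint.h>

/* Hypothetical component clocks, counted in master-clock cycles. */
typedef struct {
    uint64_t cpu_time;   /* how far the CPU has been emulated */
    uint64_t ppu_time;   /* how far the PPU has been emulated */
} Emu;

/* Stand-ins for the real per-component steppers. */
static void run_cpu_until(Emu *e, uint64_t target) {
    while (e->cpu_time < target)
        e->cpu_time += 3;          /* one CPU cycle = 3 master cycles (NTSC) */
}
static void run_ppu_until(Emu *e, uint64_t target) {
    while (e->ppu_time < target)
        e->ppu_time += 1;          /* one PPU dot per master cycle */
}

/* The basic technique: given the next time the components could
 * affect each other, run each one up to that point. */
static void catch_up_to(Emu *e, uint64_t event_time) {
    run_cpu_until(e, event_time);  /* run the CPU up to the event */
    run_ppu_until(e, event_time);  /* then let the PPU catch up */
}
```

At the end of a frame, the same routine is called with the frame-boundary time so that every component is fully caught up before the video surface is handed off.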
Prediction
One basic technique involves predicting when each component will do something "important", like asserting an interrupt or changing a status register, and then running one component ahead until that time.
Some things can be predicted:
- Vertical blanking NMI
- Sprite 0 hit
- Lines containing more than 8 sprites, which set the sprite overflow flag
- APU frame counter IRQ
- APU length counter status
- Mapper IRQ, in many cases
An emulator might make a rough prediction that slightly underestimates the time until that component sees the change, run that component for that amount of time, and then fall back to I/O catch-up or cycle-by-cycle emulation until the "important" event has happened.
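One way to picture this underestimate-then-fine-step approach is the sketch below. The safety margin, cycle counts, and fixed instruction length are all illustrative assumptions; a real emulator would derive the predicted cycle from PPU timing.

```c
#include <stdint.h>

#define SAFETY_MARGIN 8            /* deliberately undershoot the prediction */

typedef struct {
    uint64_t cycle;                /* current CPU cycle */
    int nmi_pending;
} Cpu;

/* Stand-in: run one CPU instruction (here: a fixed 2 cycles). */
static void step_instruction(Cpu *c) { c->cycle += 2; }

/* Run in bulk until just before the predicted NMI cycle, then fall
 * back to per-instruction stepping with checks so the NMI lands on
 * the right cycle. */
static void run_until_nmi(Cpu *c, uint64_t nmi_cycle) {
    while (c->cycle + SAFETY_MARGIN < nmi_cycle)
        step_instruction(c);       /* fast path: no event checks needed */
    while (!c->nmi_pending) {      /* slow path: check after every step */
        step_instruction(c);
        if (c->cycle >= nmi_cycle)
            c->nmi_pending = 1;
    }
}
```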
Timestamping
Another technique involves remembering at what time (that is, what cycle) the CPU has written to each register, and then having the other component process the write as if it had occurred at that cycle.
But if a timestamped write would change a prediction, you'll want to catch up the other components instead of timestamping the write:
- Writes to PPU registers (especially $2004) might change the sprite 0 prediction.
- Writes to mapper or PPU registers might change the mapper IRQ prediction.
- Writes to APU registers might change the Frame IRQ prediction and the length counter predictions.
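A timestamped write queue might look like the sketch below. The fixed-size queue and the register mirroring are simplifying assumptions; the point is only that the CPU logs writes immediately and the PPU replays each one at the cycle it originally occurred.

```c
#include <stdint.h>

/* A timestamped register write: the CPU logs it now, the PPU
 * applies it later as if it had happened at that cycle. */
typedef struct {
    uint64_t cycle;
    uint16_t addr;
    uint8_t  value;
} Write;

#define MAX_WRITES 64
static Write queue[MAX_WRITES];
static int   queue_len;

/* CPU side: record the write instead of applying it right away. */
static void log_write(uint64_t cycle, uint16_t addr, uint8_t value) {
    if (queue_len < MAX_WRITES)
        queue[queue_len++] = (Write){ cycle, addr, value };
}

/* PPU side: while catching up, apply each queued write once the
 * PPU reaches the cycle at which the CPU performed it. */
static uint8_t ppu_regs[8];        /* $2000-$2007, mirrored */
static void ppu_catch_up(uint64_t from, uint64_t to) {
    for (uint64_t t = from; t < to; t++) {
        for (int i = 0; i < queue_len; i++)
            if (queue[i].cycle == t)
                ppu_regs[queue[i].addr & 7] = queue[i].value;
        /* ... emulate one PPU dot here ... */
    }
    queue_len = 0;                 /* all pending writes consumed */
}
```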
Scanline-based emulation
A scanline-based emulator uses a crude form of prediction and timestamping: it assumes that anything "important" happens only at scanline granularity, and timestamps are rounded to a scanline boundary. It runs the CPU for one scanline's worth of cycles, then runs the PPU and mapper for one scanline (341 dots), and after all scanlines are finished, runs the APU for one frame. This isn't perfect, but it can run "well-behaved" games efficiently on emulators designed for old PCs or handheld devices. Most mappers that generate interrupts do so at some predictable point in the scanline, and few games use the APU interrupt or write to the same APU register multiple times in a frame (except possibly to write raw samples to $4011).
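A scanline-granularity main loop could be sketched like this, assuming NTSC figures (341 dots per scanline, 3 dots per CPU cycle, 262 scanlines per frame) and stand-in component steppers:

```c
#include <stdint.h>

#define DOTS_PER_LINE   341
#define LINES_PER_FRAME 262

static uint64_t cpu_cycles, ppu_dots, apu_cycles;
static uint64_t cpu_remainder;       /* 341 is not divisible by 3 */

/* Stand-ins for the real component steppers. */
static void run_cpu(uint64_t cycles) { cpu_cycles += cycles; }
static void run_ppu(uint64_t dots)   { ppu_dots   += dots; }
static void run_apu(uint64_t cycles) { apu_cycles += cycles; }

/* One frame at scanline granularity: CPU then PPU per line,
 * APU caught up once at the end of the frame. */
static void run_frame(void) {
    for (int line = 0; line < LINES_PER_FRAME; line++) {
        cpu_remainder += DOTS_PER_LINE;
        run_cpu(cpu_remainder / 3);  /* whole CPU cycles this line */
        cpu_remainder %= 3;          /* carry the fraction forward */
        run_ppu(DOTS_PER_LINE);      /* mapper would be clocked here too */
    }
    run_apu(cpu_cycles - apu_cycles); /* APU runs at CPU rate, once per frame */
}
```

Carrying the remainder forward keeps the CPU and PPU from drifting apart across scanlines even though 341 dots is not a whole number of CPU cycles.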
Rewind to checkpoint
This isn't as important for the NES, but in systems with multiple CPUs that can interrupt each other (like the Super NES with an SA-1 coprocessor), prediction is far more difficult. So an emulator can set a checkpoint on CPU A, predict that CPU B will not assert an interrupt, and run CPU A for that time. If it turns out that CPU B raised an interrupt, the emulator can rewind CPU A to the checkpoint and run it until the time the interrupt occurred.
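The checkpoint-and-rewind idea can be sketched as follows. The single-register CPU state and the fixed per-cycle "work" are placeholders; a real emulator would snapshot the full state of CPU A.

```c
#include <stdint.h>

/* Minimal CPU state; a real emulator would checkpoint much more. */
typedef struct {
    uint64_t cycle;
    uint8_t  a;              /* accumulator, as a stand-in for all state */
} CpuState;

/* Stand-in: advance the CPU one cycle at a time up to `until`. */
static void run(CpuState *c, uint64_t until) {
    while (c->cycle < until) { c->cycle++; c->a++; }
}

/* Speculate that no interrupt arrives before `until`. If one did
 * (at irq_cycle), restore the checkpoint and rerun only up to it. */
static void run_speculative(CpuState *c, uint64_t until,
                            int irq_happened, uint64_t irq_cycle) {
    CpuState checkpoint = *c;        /* save a checkpoint */
    run(c, until);                   /* optimistic bulk run */
    if (irq_happened && irq_cycle < until) {
        *c = checkpoint;             /* rewind to the checkpoint */
        run(c, irq_cycle);           /* rerun up to the interrupt */
        /* ... service the interrupt here ... */
    }
}
```

When the speculation holds, the bulk run is all that executes; the cost of a rewind is only paid on the (hopefully rare) mispredicted interrupt.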