By no means complete or up to date, but a nice entry point is this presentation:
The two basic approaches are rasterization and ray tracing, but neither is tied exclusively to realtime or offline rendering. In the past, algorithms closer to rasterization than to ray tracing were used in offline rendering as well, for example scanline renderers and the REYES architecture used in RenderMan, and nowadays ray tracing is used at interactive framerates. There are illumination models and global-illumination approaches that can be used with both architectures, and then there are many different variations of ray tracing (path tracing and so on). There is a whole zoo of methods, so it will be hard to get a complete overview quickly, but looking into rasterization and ray tracing will give you the basics.

The situation gets even more complicated now that deep learning enters the scene. Not only can it be used to reduce noise (denoising), it can even be used for image synthesis on its own. See for example Deep Illumination: Approximating Dynamic Global Illumination with Generative Adversarial Network.
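To make the ray tracing side of the comparison concrete, here is a minimal sketch of the core idea: shoot one ray per pixel from a pinhole camera, intersect it with the scene (a single hard-coded sphere here), and shade the hit point with a simple Lambertian term. Everything in it (the scene, the camera setup, the function names) is my own illustrative choice, not taken from any particular renderer.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    # Solve |o + t*d - c|^2 = r^2 for the smallest positive t.
    # direction is assumed normalized, so the quadratic's a == 1.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def trace(width, height):
    # Hard-coded scene: one sphere at z = -3, a point light,
    # and a pinhole camera at the origin looking down -z.
    center, radius = (0.0, 0.0, -3.0), 1.0
    light = (5.0, 5.0, 0.0)
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a point on an image plane at z = -1.
            px = 2.0 * (x + 0.5) / width - 1.0
            py = 1.0 - 2.0 * (y + 0.5) / height
            norm = math.sqrt(px * px + py * py + 1.0)
            d = (px / norm, py / norm, -1.0 / norm)
            t = intersect_sphere((0.0, 0.0, 0.0), d, center, radius)
            if t is None:
                row.append(0.0)  # background: black
            else:
                # Lambertian shading: brightness = max(0, n . l).
                hit = tuple(t * di for di in d)
                n = tuple((h - c) / radius for h, c in zip(hit, center))
                to_light = tuple(l - h for l, h in zip(light, hit))
                ln = math.sqrt(sum(v * v for v in to_light))
                l_dir = tuple(v / ln for v in to_light)
                row.append(max(0.0, sum(a * b for a, b in zip(n, l_dir))))
        image.append(row)
    return image
```

A rasterizer inverts this loop structure: instead of iterating over pixels and asking "which object does this ray hit?", it iterates over primitives and asks "which pixels does this triangle cover?". Most of the variations mentioned above (path tracing, global illumination methods) build on top of one of these two loops.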
If anyone has a good paper on rendering-algorithm categorization, or a diagram that relates all (or many) of the known algorithms, I would be interested as well.