OpenGL is not so easy to use. The API exposes thousands of functions, grouped into extensions and core features, that you have to check against every single display driver release or the 3D application may not work. Since OpenGL is a graphics library used to program cool gfx effects without serious knowledge of the underlying display driver, a large range of developers is tempted to use it regardless of the technical problems. For example, the functions are loaded "automagically" by an external loading library (like glew) and used to produce the desired effect, pretending that they are available everywhere. Of course this is totally wrong, because OpenGL is scattered across dozens of extensions and core features that are tied to the "target" version that you want to support. Loading libraries like glew are dangerous because they try to load all the OpenGL functions implemented by the display driver without making a proper check, giving you the illusion that the problem doesn't exist. The main problem with this approach is that you cannot develop a good OpenGL application without taking the following decision:
- How many OpenGL versions and extensions do I have to support?
From this choice you can define the graphics aspect of the application and how to scale it to support a large range of display drivers, including physical hardware and the drivers provided by virtual machines. For example, VirtualBox with Guest Additions uses Chromium 1.9, which comes with OpenGL 2.1 and GLSL 1.20, so your application won't start if you programmed it against OpenGL 4.5; even worse, it won't start on graphics cards that support at most version 4.4 (which is very recent). For this reason, it's necessary to have full awareness of the OpenGL scalability principles that must be applied to start on most of the available graphics cards, reducing or improving the graphics quality based on the version that you decided to target. With this level of awareness, you will realize that you don't need any kind of loading library to use OpenGL, but only a good check of the available features, which you can program by yourself. Moreover, libraries like glew are the worst, because they are implemented to replace the official gl.h and glext.h header files with a custom version anchored to the OpenGL version supported by that particular glew release.
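As a first step in that direction, you can parse the string returned by glGetString(GL_VERSION) and compare it against the minimum version you decided to target. A minimal sketch, where parseGLVersion and meetsTarget are hypothetical helper names used only for illustration:

```cpp
#include <cstdio>

// Parse the string returned by glGetString(GL_VERSION) into major/minor.
// The spec guarantees it begins with "<major>.<minor>", optionally followed
// by ".<release>" and vendor-specific text (e.g. "2.1 Chromium 1.9").
bool parseGLVersion(const char *version, int *major, int *minor)
{
    if (version == nullptr)
        return false;
    return std::sscanf(version, "%d.%d", major, minor) == 2;
}

// True when the running context satisfies the minimum target version.
bool meetsTarget(int major, int minor, int targetMajor, int targetMinor)
{
    return (major > targetMajor) ||
           (major == targetMajor && minor >= targetMinor);
}
```

With this check in place, the application can refuse to start with a clear message, or fall back to a lower-quality rendering path, instead of crashing on a missing function.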
This means that if your project is anchored to a certain version of glew that covers at most the latest OpenGL version, when OpenGL 4.6 or 5.0 is released you have to wait for the next version of glew in order to support it, if it is ever released. On top of that, you cannot include gl.h or glext.h directly, and it sucks. You have to call an init function on the main thread without precise knowledge of what the library is doing in your place, and it sucks. There is no support for features available on multiple contexts created with different adapters, and it sucks. And if you consider that loading the functions is such a precise job that it cannot be delegated to a third-party library that does it in your place without making a precise classification of the available features and the target versions, you realize that a loading library is really useless and sometimes dangerous. And the work is so simple. If you want to load a function, you just use wglGetProcAddress and think about the name of the function. For example, if you want to load glBindBuffer, the function pointer type declared in glext.h is PFNGLBINDBUFFERPROC:
PFNGLBINDBUFFERPROC _glBindBuffer = (PFNGLBINDBUFFERPROC) wglGetProcAddress("glBindBuffer");
"PFN + Function name (upper case) + PROC". This is all you need to remember... that's it! You can do this work manually, wherever it is required, checking if the function is supported by its relative extension or if it is a core feature. If you find this work boring, you can create a simple script in Python that generates all these wglGetProcAddress for you from the gl.xml of the latest OpenGL version, selecting just the part that you want to support into your implementation class.
But before that, you have to choose the minimum version that you want to support, deciding the degree of scalability based on the application that you are about to produce. If it's a game with average graphics and a brilliant idea about the gameplay, I'd say that it should start on most of the hardware available on the market (including the virtual machines), which means at least OpenGL 2.1. This version is a reasonable choice because you have vertex buffer objects (VBOs), element buffer objects (EBOs, both core since 1.5) and the shading language (GLSL) at version 1.20, which is pretty decent to produce a good amount of graphics effects; vertex array objects (VAOs) only enter the core with 3.0, or earlier through extensions like GL_APPLE_vertex_array_object. If you decide to support only this version, most of the job is done. Otherwise, if you want to complicate your life, you can decide to also target OpenGL versions 3 and 4, and support them when they are available.
In this case, the best suggestion I can give you is to organize your work into different categories, making a class for each OpenGL version that you want to support, including the major and minor versions, and taking a series of decisions. For example, you may want to support only the latest OpenGL 3.3 and the latest OpenGL 4.5, or every existing minor version of both the 3 and 4 major versions. Considering the most complex case, you can have an abstract class OpenGL from which all the implementation classes derive, one for each supported OpenGL version. For example:
class OpenGL; class OpenGL21; class OpenGL30; class OpenGL31; class OpenGL32; class OpenGL33; class OpenGL40; class OpenGL41; class OpenGL42; class OpenGL43; class OpenGL44; class OpenGL45;
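As a sketch of what the base of this hierarchy could look like (the method names here are illustrative, not taken from any real code base), each derived class reports its version and loads its own set of function pointers:

```cpp
// Abstract interface: each derived class wraps one supported GL version.
class OpenGL
{
public:
    virtual ~OpenGL() = default;
    virtual int  majorVersion() const = 0;
    virtual int  minorVersion() const = 0;
    virtual bool load() = 0;   // resolve the function pointers for this version
};

class OpenGL21 : public OpenGL
{
public:
    int  majorVersion() const override { return 2; }
    int  minorVersion() const override { return 1; }
    bool load() override
    {
        // The wglGetProcAddress calls for the 2.1 core functions would go
        // here, returning false if any mandatory one is missing.
        return true;
    }
};
```

At startup you instantiate the most capable class that the driver reports, and the rest of the application talks only to the abstract interface.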
Inside these classes, you can implement the right functions, choosing the right balance between the graphics effects and the available extensions. If you think this work is mad, consider that there are extensions promoted into different versions of the OpenGL core. If you know the target version, you can avoid a useless check to see whether a function is ARB or non-ARB, in case it has been promoted into the core. For example, on OpenGL 1.5 you don't need to check whether GL_ARB_vertex_buffer_object is available, because glBindBuffer is already a core feature, while on 1.4 you need to check the extension and use glBindBufferARB instead; so you have to be careful when you make your decision, because all these functions may vary from one version to another.
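The glBindBuffer example above can be sketched as a small helper that picks the right entry point to request, falling back to the ARB name on pre-1.5 drivers. hasExtension and bufferBindEntryPoint are hypothetical names; note that hasExtension does a whole-word match on the GL_EXTENSIONS string, because a plain substring search would wrongly match extensions whose name is a prefix of another:

```cpp
#include <string>

// Whole-word search of an extension name in the GL_EXTENSIONS string.
bool hasExtension(const std::string &extensions, const std::string &name)
{
    size_t pos = 0;
    while ((pos = extensions.find(name, pos)) != std::string::npos) {
        bool startOk = (pos == 0 || extensions[pos - 1] == ' ');
        size_t end = pos + name.size();
        bool endOk = (end == extensions.size() || extensions[end] == ' ');
        if (startOk && endOk)
            return true;
        pos = end;
    }
    return false;
}

// Decide which entry point to request for buffer binding:
// core since 1.5, ARB extension fallback before that.
const char *bufferBindEntryPoint(int major, int minor,
                                 const std::string &extensions)
{
    if (major > 1 || (major == 1 && minor >= 5))
        return "glBindBuffer";                    // core feature
    if (hasExtension(extensions, "GL_ARB_vertex_buffer_object"))
        return "glBindBufferARB";                 // extension fallback
    return nullptr;                               // VBOs not available
}
```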
When the decision is made, you can write a simple script in Python that generates all the loading procedures from gl.xml, plus a configuration file with the desired extensions for each implemented version. The core feature functions can be loaded without any extra check (apart from the OpenGL version), while generic, ARB-approved, vendor-specific and core extensions always need to be checked before loading the respective functions.
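The output of such a generator can be as simple as a table that one loop walks at startup, loading a function when the version makes it core or when its extension is advertised. A minimal sketch under those assumptions, with GLFunction and loadFunctions as hypothetical names and a resolver parameter standing in for wglGetProcAddress (a substring check keeps the sketch short; a real loader should match whole extension names):

```cpp
#include <string>
#include <vector>

// One row of the generated table: entry point name, the first core version
// that provides it, and the extension that provides it on older drivers
// (empty when the function exists only in core).
struct GLFunction
{
    std::string name;
    int coreMajor, coreMinor;
    std::string extension;
};

// Stand-in for wglGetProcAddress; returns null when unavailable.
using Resolver = void *(*)(const char *);

// Load every function that the current version/extension set allows.
// Returns the number of functions successfully resolved.
int loadFunctions(const std::vector<GLFunction> &table,
                  int major, int minor,
                  const std::string &extensions,
                  Resolver resolve,
                  std::vector<void *> &out)
{
    int loaded = 0;
    for (const GLFunction &fn : table) {
        bool core = (major > fn.coreMajor) ||
                    (major == fn.coreMajor && minor >= fn.coreMinor);
        bool viaExt = !fn.extension.empty() &&
                      extensions.find(fn.extension) != std::string::npos;
        void *ptr = (core || viaExt) ? resolve(fn.name.c_str()) : nullptr;
        out.push_back(ptr);
        if (ptr != nullptr)
            ++loaded;
    }
    return loaded;
}
```

Each implementation class then only has to hand its own table to this loop and verify that everything it considers mandatory was resolved.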