I created a video to show the potential of the Vulkan renderer included in my framework. Now the engine can import 3D models from several 3D formats (including COLLADA) and render them with the Vulkan libraries.
The engine is equipped with a proprietary material system and mesh format. The materials of the imported models are converted to the engine's material format, which is then translated into the shaders used to render them. As you can see from the video, the engine is already capable of rendering millions of triangles at a high framerate and resolution (3840x2160).
Now the engine can draw text in Vulkan with bitmap rendering. The engine checks which characters are on the screen and dynamically creates textures only for the glyphs that need to be drawn, at the correct size and aspect ratio.
The characters are pre-rendered into bitmaps with the FreeType library and loaded into Vulkan textures only if needed. When the text is no longer rendered, the font is deallocated to free space for other resources. In this way, glyphs can be drawn as normal textured polygons, without a significant impact on performance. The text rendering algorithm is capable of drawing text with different alignment formats, including the "justified" one that you see in this video, like in any other word processor. The GUI is drawn with the GPU and can be used for video games or 3D applications which require advanced performance and functionality.
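As a rough sketch of the idea, glyph bitmaps can be rasterized with FreeType and uploaded on demand; here `uploadToVulkanTexture`, `GlyphTexture` and the glyph cache are hypothetical placeholders for illustration, not the engine's actual classes:

```cpp
// Sketch of on-demand glyph rasterization with FreeType. The Vulkan upload
// helper and the cache types are hypothetical, not the engine's real API.
#include <ft2build.h>
#include FT_FREETYPE_H
#include <map>
#include <string>

struct GlyphTexture { /* Vulkan image handle, size, bearing, advance */ };

// Hypothetical helper: uploads an 8-bit alpha bitmap into a VkImage.
GlyphTexture uploadToVulkanTexture(const unsigned char* pixels,
                                   unsigned width, unsigned height);

std::map<char, GlyphTexture> glyphCache;

void ensureGlyphs(FT_Face face, const std::string& visibleText,
                  unsigned pixelHeight)
{
    FT_Set_Pixel_Sizes(face, 0, pixelHeight);  // render at the on-screen size
    for (char c : visibleText) {
        if (glyphCache.count(c)) continue;     // already resident
        if (FT_Load_Char(face, c, FT_LOAD_RENDER) != 0)
            continue;                          // skip glyphs that fail to load
        const FT_Bitmap& bmp = face->glyph->bitmap;
        // Note: real code must respect bmp.pitch when copying rows.
        glyphCache[c] = uploadToVulkanTexture(bmp.buffer, bmp.width, bmp.rows);
    }
    // Glyphs that left the screen can be erased from glyphCache and their
    // Vulkan textures freed, as described above.
}
```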
Skinned mesh rendering is a fundamental part of every modern 3D engine, so I couldn't avoid implementing it. The skinned mesh with weights, indices, bones, skeleton and animated nodes is imported into my format with the AssImp library. I added weights and indices to the vertex attributes, while the bone matrices are written into a shader storage buffer object. The skinning is computed on the GPU, by the vertex shader.
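To illustrate, a minimal vertex shader for this kind of GPU skinning could look like the following, kept here as a GLSL string in C++ source; the binding points, locations and the four-influence limit are my assumptions for the sketch, not the engine's actual layout:

```cpp
// Illustrative GLSL source for GPU skinning, stored as a C++ raw string.
// Bindings, locations and the 4-bone limit are assumptions for the sketch.
static const char* kSkinningVertexShader = R"(
#version 450
layout(location = 0) in vec3  inPosition;
layout(location = 1) in vec4  inWeights;   // up to 4 bone influences
layout(location = 2) in ivec4 inIndices;   // bone indices per vertex

// Bone matrices live in a shader storage buffer, as described above.
layout(std430, binding = 0) readonly buffer Bones { mat4 bones[]; };
layout(binding = 1) uniform Transform { mat4 mvp; };

void main() {
    mat4 skin = inWeights.x * bones[inIndices.x]
              + inWeights.y * bones[inIndices.y]
              + inWeights.z * bones[inIndices.z]
              + inWeights.w * bones[inIndices.w];
    gl_Position = mvp * skin * vec4(inPosition, 1.0);
}
)";
```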
In the video you can see the final result of the implementation. The model has been imported from the Doom 3 format into my format, then animated and rendered by the 3D engine. For now, the quaternion keys are interpolated with a slerp every frame. A possible optimization is to precalculate all the bone matrices into an SSBO at a fixed frame rate (like 60 fps) and use it to render a massive number of meshes.
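For reference, a standalone slerp between two rotation keys can be sketched like this; `Quat` is a simplified stand-in for whatever quaternion type the engine actually uses:

```cpp
// Minimal slerp sketch between two quaternion keys.
#include <cmath>

struct Quat { float x, y, z, w; };

Quat slerp(const Quat& a, Quat b, float t)
{
    float cosTheta = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
    if (cosTheta < 0.0f) {                    // take the shorter arc
        b = { -b.x, -b.y, -b.z, -b.w };
        cosTheta = -cosTheta;
    }
    if (cosTheta > 0.9995f)                   // nearly parallel: lerp is safer
        return { a.x + t*(b.x - a.x), a.y + t*(b.y - a.y),
                 a.z + t*(b.z - a.z), a.w + t*(b.w - a.w) };
    float theta = std::acos(cosTheta);
    float sa = std::sin((1.0f - t) * theta) / std::sin(theta);
    float sb = std::sin(t * theta) / std::sin(theta);
    return { sa*a.x + sb*b.x, sa*a.y + sb*b.y,
             sa*a.z + sb*b.z, sa*a.w + sb*b.w };
}
```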
I improved the material system, introducing the lighting stage. I removed the fragment stage and replaced it with a color stage and a lighting stage. The output color is calculated as the sum of the color stage and the lighting stage. The color stage has only one material node as input, which is used to produce the output color for this stage.
The lighting stage takes more inputs, like ambient, diffuse (or albedo), specular, roughness and metalness, which are mixed with physically based rendering (PBR). Each input is connected to one material node that can be the result of operations between multiple material nodes, so every stage can have its own textures or math operations between textures, uniforms and constants. In the video you can see a model with advanced materials imported to show the benefits of the latest optimizations. In this case, the ambient term is rendered correctly and mixed with the diffuse textures.
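A minimal sketch of how such a node graph can be represented (all class names here are invented for illustration; the post doesn't show the framework's real types):

```cpp
// Hypothetical representation of material expression nodes per stage.
#include <memory>

struct MaterialNode {                         // base expression node
    virtual ~MaterialNode() = default;
};
struct TextureNode  : MaterialNode { /* image + texcoord set */ };
struct ConstantNode : MaterialNode { float rgba[4]; };
struct MultiplyNode : MaterialNode {          // math between two sub-graphs
    std::shared_ptr<MaterialNode> a, b;
};

struct Material {
    std::shared_ptr<MaterialNode> color;      // color stage: a single node
    std::shared_ptr<MaterialNode> ambient;    // lighting-stage inputs, each
    std::shared_ptr<MaterialNode> albedo;     // of which can be a whole
    std::shared_ptr<MaterialNode> specular;   // sub-graph of operations
    std::shared_ptr<MaterialNode> roughness;
    std::shared_ptr<MaterialNode> metalness;
    // final output = color stage + PBR lighting stage, as described above
};
```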
Now the importer based on the AssImp library is capable of importing model materials and textures into my format. I also added support for normal maps with tangent and bitangent vertex attributes, improving the lighting stage in the fragment shader to render them properly.
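A sketch of how such an import can look with the AssImp API; the conversion into my format is only hinted at in the comments:

```cpp
// Sketch of an AssImp import that also requests tangents and bitangents.
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>

bool importModel(const char* path)
{
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile(path,
        aiProcess_Triangulate | aiProcess_CalcTangentSpace);
    if (!scene)
        return false;

    for (unsigned m = 0; m < scene->mNumMeshes; ++m) {
        const aiMesh* mesh = scene->mMeshes[m];
        if (!mesh->HasTangentsAndBitangents())
            continue;  // tangent space needs normals and texcoords
        for (unsigned v = 0; v < mesh->mNumVertices; ++v) {
            aiVector3D p = mesh->mVertices[v];
            aiVector3D n = mesh->mNormals[v];
            aiVector3D t = mesh->mTangents[v];    // from CalcTangentSpace
            aiVector3D b = mesh->mBitangents[v];
            // ... append position/normal/tangent/bitangent to the
            // engine's own vertex attributes here.
        }
    }
    // scene->mMaterials[] carries the texture paths (diffuse, specular,
    // normal maps) to be converted into the engine's material format.
    return true;
}
```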
In the video you can see the nano suit model imported from the COLLADA format. As the object rotates, you can see the benefits of bump mapping and specular textures.
I decided to use the AssImp library to import models from other formats into my 3D mesh format. The video shows a first implementation of the importer.
Vertices and normals are converted along with the skeleton structure, while the red material is generated just to render the model on the screen. The next step is to load the materials and the associated textures.
Finally, the very first 3D model rendered by the 3D engine. Even if it looks like a simple torus demo, the main feature this time is the format used for the 3D mesh and the conversion from material nodes to a Vulkan shader for the rendering.
The mesh is composed of a polygon hull, a set of vertex attributes and a layout that defines the nature of the vertex attributes. The polygon hull represents the geometric structure of the mesh, while the vertex attributes define its graphical and physical aspect. A mesh can have virtually any number of vertex attributes: position, normal, colors, texcoords and other new attributes used by the material.
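A minimal sketch of this mesh structure (names invented for illustration, since the post doesn't show the actual classes):

```cpp
// Hypothetical sketch of the mesh format described above.
#include <cstdint>
#include <vector>

enum class AttribSemantic { Position, Normal, Color, TexCoord, Custom };

struct VertexAttribute {
    AttribSemantic semantic;    // the "nature" recorded in the layout
    uint32_t components;        // e.g. 3 for position, 2 for texcoords
    std::vector<float> data;    // one set of components per vertex
};

struct Mesh {
    std::vector<uint32_t> polygonHull;        // geometric structure (indices)
    std::vector<VertexAttribute> attributes;  // any number of attributes
};
```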
Materials are composed of expression nodes, which are converted to shaders in a second step. Every material has a layout with the vertex attributes required for the rendering. The material structure used to render this model is the following:
The layout of the mesh doesn't have to match the material's exactly: if the mesh has a required vertex attribute it's used, otherwise zero values are used instead. It's up to the material to decide how to use the vertex attributes offered by the mesh. In this way, a single material can be used to render any kind of mesh. Of course, a mesh without normals cannot render diffuse or specular lighting, and one without texcoords cannot render textures, normal maps and so on.
Uniform buffers can be used by a single mesh to change the material content, like colors or texture coords. For instance, the diffuse color of this material can be connected to a uniform contained in a 3D mesh and changed on the fly, changing the color of the object. In this way, it's possible to reuse the same material for multiple objects, even with different appearances, like particles or game characters.
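To make the idea concrete, a hypothetical usage sketch (`Mesh`, `Material`, `setUniform` and `render` are invented placeholders, not the framework's API):

```cpp
// Two meshes sharing one material but with different uniform contents.
struct Color { float r, g, b, a; };
struct Material { /* expression nodes, as above */ };
struct Mesh {
    void setUniform(const char* name, Color value);  // writes the mesh's UBO
};
void render(Mesh&, Material&);

void drawTwoObjects(Mesh& a, Mesh& b, Material& shared)
{
    a.setUniform("diffuseColor", Color{1.f, 0.f, 0.f, 1.f});  // red instance
    b.setUniform("diffuseColor", Color{0.f, 0.f, 1.f, 1.f});  // blue instance
    render(a, shared);   // same material, same generated shader...
    render(b, shared);   // ...two different colors on screen
}
```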
I improved the implementation of materials and textures with Vulkan. Now every material is translated into a GLSL shader, which is compiled into SPIR-V code with the shaderc library. The shader is generated along with the graphics pipeline to match the material settings. For now the materials are very simple, used to draw an image texture with alpha blending or a filled color.
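The shaderc step of this pipeline can be sketched as follows; the generated GLSL itself comes from the material translation, which isn't shown here:

```cpp
// Compile generated GLSL into SPIR-V with the shaderc C++ API.
#include <shaderc/shaderc.hpp>
#include <cstdint>
#include <string>
#include <vector>

std::vector<uint32_t> compileToSpirv(const std::string& glslSource)
{
    shaderc::Compiler compiler;
    shaderc::CompileOptions options;
    shaderc::SpvCompilationResult result = compiler.CompileGlslToSpv(
        glslSource, shaderc_glsl_fragment_shader,
        "generated_material.frag", options);
    if (result.GetCompilationStatus() != shaderc_compilation_status_success)
        return {};  // a real engine would log result.GetErrorMessage()
    return { result.cbegin(), result.cend() };  // SPIR-V words for Vulkan
}
```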
As you can see from the video, the GUI now has a normal appearance instead of the rainbow rectangles of before. The next step is to support path rendering and font rendering, for drawing text. In the future, the same material system will be used to draw 3D content too.
I am happy to announce that the Vulkan library has finally been integrated into my framework. For the moment nothing complicated: I limited myself to implementing a specialization of the graphics context that draws simple colored rectangles instead of the images drawn by the Cairo library. At the programming-interface level, it's possible to invoke drawing commands with the same degree of complexity and practically identical management of textures, materials and uniforms.
Each rectangle is associated with a transformation matrix, which is translated into a uniform buffer. It's also possible to organize the rendering into multiple layers, allowing the reuse of command buffers with minimal programming effort.
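As a small sketch of the per-rectangle data, assuming a layout like the following (the actual structure in the framework isn't shown in the post):

```cpp
// Hypothetical per-rectangle uniform data, updated through a mapped buffer.
#include <cstring>

struct RectUniform {
    float transform[16];   // 4x4 transformation matrix (std140-compatible)
    float color[4];        // fill color of the rectangle
};

// mappedUniform points into a persistently mapped VkBuffer region.
void updateRect(void* mappedUniform, const RectUniform& u)
{
    std::memcpy(mappedUniform, &u, sizeof(u));
}
```

Since a layer's draw commands don't have to change every frame, its command buffer can be recorded once and simply resubmitted, which is where the reuse mentioned above comes from.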
As you can see from the above image, the 2D GUI based on the graphics context works quite well. It's possible to drag the windows and see them move on the screen at a high framerate, which is the main reason it's worth bothering with the Vulkan libraries.
For the moment there is an implementation of textures and materials, but I have not yet finished the rendering part at the shader level. The difficulty lies in the fact that the framework must resolve the material nodes to generate the proper GLSL shader to be converted into SPIR-V, create a suitable graphics pipeline and set it before rendering. The next step is to finish this part and make the 2D GUI identical to the Cairo version.
Then I can proceed to implement 3D functionality, with full material management. The main goal is to implement an importer with the AssImp library and load 3D models. Then I will proceed to refine the 3D functionality with a sophisticated engine optimized for modern real-time computer graphics.
Finally I came to a first working version of the 2D GUI based on the Cairo libraries. The entire GUI architecture is based on 2D engine components like the graphics and physics engines. The graphics engine makes use of a graphics context that in this implementation is based on Cairo, but it can be specialized with any library.
As you can see in the video, I reused an old skin from Windows XP, but the skin is totally programmable and will be changed in the future. For now there are only simple widgets: form windows, buttons, option buttons and check boxes. The next step is to implement composite widgets like scroll bars, text boxes, tabs, lists, treeviews and so on. This GUI can be used for video games or to produce professional applications. The GUI is designed to run full screen or using the widgets of the operating system. The full-screen variant can be specialized to work with GPU libraries, like Direct3D or Vulkan. As a modern feature, a transform matrix can be applied to every widget, so widgets can be translated, rotated, scaled or skewed with matrix operations. The interface can be designed with an external editor rather than with code embedded inside the application. The only code required on the application side is the code that manages the widget events.
Having a graphics context to draw something on the screen is not enough when you have to deal with complex scenes made of many textures, materials, shapes and assets of any kind. This is the reason why, at some point in my framework's development, I introduced the concepts of Scene, Engine and Resources. Basically, a scene is a collection of elements, which can be 2D or 3D objects like shapes or meshes; the Engine is a component used to handle the scene; and the Resources are a set of textures, materials and assets. All these kinds of resources are referenced by elements through UUID strings.
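A minimal sketch of this organization (the names are invented for illustration):

```cpp
// Hypothetical Scene holding elements and UUID-addressed resources.
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

struct Resource { virtual ~Resource() = default; };  // texture, material, ...

struct Element {                        // a 2D shape or 3D mesh in the scene
    std::string materialUuid;           // reference to a resource by UUID
};

struct Scene {
    std::vector<Element> elements;
    std::unordered_map<std::string, std::shared_ptr<Resource>> resources;

    Resource* find(const std::string& uuid) {
        auto it = resources.find(uuid);
        return it == resources.end() ? nullptr : it->second.get();
    }
};
```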
I implemented different kinds of Engines. The 'Generic' Engine is used to pre-process the scene, preparing it for rendering or for other kinds of operations, like collision detection. When the generic engine iterates over the scene, all its internal geometries are transformed to be placed on the screen. The 'Graphics' Engine translates the transformed scene into a series of draw commands for the graphics context. The picture above shows a simple test of the Engine, with an element that is a 2D shape composed of three sub-paths (1 contour and 2 holes), with a radial texture material for the fill and a color material for the external stroke. Even if this test is simple, the Engine is designed to handle far more complex scenes, and it will be used to create a whole 2D GUI from scratch.
In my framework, I implemented materials to be extremely scalable. First of all, I decided to abandon the old format similar to 3D Studio Max or Maxon Cinema 4D and adopt a format more similar to UE4's, based on visual expression nodes, where a node in this case is called a "material component".
A material is composed of different stages: displacement, fragment, blend and radiance. Every stage has parameters and a single input component, which can be a texture with texture coords, diffusion with lights and normals, or the combination of multiple components with "add" or "multiply" nodes.
If program shaders are supported by the graphics context specialization, the material is translated into a program shader; otherwise it is rendered as well as possible with the component types supported by the graphics library.
I implemented a set of classes to handle system windows and events. Now it's possible to open a window and draw an image inside it. I also programmed an abstract graphics context class to handle the functionality common to the most important graphics libraries, like DirectX, OpenGL and Vulkan, even if the first specialization of the context makes use of the Cairo library to provide software rendering.
The abstraction layer makes the context compatible with the features available from the graphics library that specializes it. For example, Cairo has support for linear and radial patterns and path rendering, but no other patterns can be programmed with program shaders. If a feature is not supported by the library, it is reported as not supported by an enum function exposed by the abstract class. In this way, the component that is using the rendering context is aware of the features that are available and can make the best use of them. The image shown by the example is a demo written with the specialized class that makes use of the Cairo library, with a linear pattern and path rendering.
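The capability query can be pictured like this (enum values and class names are illustrative, not the framework's actual API):

```cpp
// Hypothetical sketch of the feature query exposed by the abstract context.
enum class Feature { LinearPattern, RadialPattern, PathRendering, ProgramShaders };
enum class Support { Supported, NotSupported };

struct GraphicsContext {
    virtual ~GraphicsContext() = default;
    virtual Support query(Feature f) const = 0;   // each backend answers
};

struct CairoContext : GraphicsContext {
    Support query(Feature f) const override {
        switch (f) {
            case Feature::ProgramShaders:          // Cairo: no program shaders
                return Support::NotSupported;
            default:                               // patterns, path rendering
                return Support::Supported;
        }
    }
};
```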
One of the most important components in a framework is a cross-platform loader of dynamic libraries. Without it, you cannot access the functionality of external dynamic libraries like OpenGL, DirectX or Vulkan, or at least you may have to add extra code for every library on every platform you have to support. In some cases it's better not to statically link a dynamic library and to use LoadLibrary() or dlopen() instead. With this component, I don't have to worry about how the library is linked or what platform or operating system I'm about to support; the effort of loading and linking an external library is very small. After that, I decided to use this component to dynamically link DevIL and implement full support for image conversions with that library. I also implemented a full set of classes to handle 2D shapes and 3D objects.
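The core of such a loader is small; a minimal cross-platform sketch in the spirit of the component (the framework's real class is not shown here):

```cpp
// Minimal cross-platform dynamic library loading over the two native APIs.
#if defined(_WIN32)
  #include <windows.h>
  using LibHandle = HMODULE;
  LibHandle openLibrary(const char* name)         { return LoadLibraryA(name); }
  void*     getSymbol(LibHandle h, const char* s) { return (void*)GetProcAddress(h, s); }
  void      closeLibrary(LibHandle h)             { FreeLibrary(h); }
#else
  #include <dlfcn.h>
  using LibHandle = void*;
  LibHandle openLibrary(const char* name)         { return dlopen(name, RTLD_NOW); }
  void*     getSymbol(LibHandle h, const char* s) { return dlsym(h, s); }
  void      closeLibrary(LibHandle h)             { dlclose(h); }
#endif
```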
Another fundamental component for every 2D or 3D engine is the graphics context. In my framework, a graphics context is an abstraction layer of functionality exposed by the rendering context of a graphics library, like OpenGL or Direct3D. Once I defined a full set of draw commands for drawing 2D shapes and 3D objects, I made a first specialization of this interface using the Cairo library with path rendering for drawing 2D graphics only.
Even if this framework has been designed for generic purposes, it will be used mainly to program graphics applications. From this perspective, I implemented a full set of serializable classes to handle complex numbers, vectors and matrices, and all the geometric operations that will be used to realize a 3D engine.
To serialize enum variables that want constants instead of numbers, I introduced "constant strings" (e.g. LEFT, GREATER, NULL) in human-readable formats like XML or JSON. When such a variable is deserialized by the framework, the constant string is translated into its respective numeric value; conversely, the numeric value is translated into its constant string during the serialization process.
For instance, an extended 2D vector with anchor variables:
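A hypothetical illustration of the idea (the enum, the helper functions and the serialized form are invented for this sketch, not the framework's actual format):

```cpp
// Invented example of the constant-string translation for an anchor enum.
#include <cstring>

enum class Anchor { Left, Right, Center };

const char* anchorToString(Anchor a) {
    switch (a) {
        case Anchor::Left:  return "LEFT";
        case Anchor::Right: return "RIGHT";
        default:            return "CENTER";
    }
}

Anchor anchorFromString(const char* s) {
    if (std::strcmp(s, "LEFT")  == 0) return Anchor::Left;
    if (std::strcmp(s, "RIGHT") == 0) return Anchor::Right;
    return Anchor::Center;
}

// The serialized vector could then read, in an XML-like form:
//   <vector2 x="10.0" y="20.0" anchorX="LEFT" anchorY="CENTER"/>
```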