
TextureMind Framework – Progress #3 – Graphics context and external libraries

One of the most important components in a framework is a cross-platform loader of dynamic libraries. Without it, you cannot access the functionality of external dynamic libraries like OpenGL, DirectX or Vulkan, or at least you have to add extra code for every library on every platform you want to support. In some cases it's better not to link a dynamic library statically and to use LoadLibrary() or dlopen() instead. With this component, I don't have to worry about how the library is linked or which platform or operating system I'm about to support: the effort of loading and linking an external library is minimal. After that, I decided to use this component to dynamically link DevIL and implement full support for image conversions with that library. I also implemented a full set of classes to handle 2D shapes and 3D objects.
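To give an idea of what this component looks like, here is a minimal sketch of a cross-platform wrapper around LoadLibrary() and dlopen(); the class and method names are invented for this example, they are not the actual framework API:

#include <string>
#ifdef _WIN32
#include <windows.h>
#else
#include <dlfcn.h>
#endif

// Minimal cross-platform wrapper around LoadLibrary()/dlopen().
class DynamicLibrary
{
public:
    bool open(const std::string& name)
    {
#ifdef _WIN32
        m_handle = ::LoadLibraryA(name.c_str());
#else
        m_handle = ::dlopen(name.c_str(), RTLD_NOW | RTLD_LOCAL);
#endif
        return m_handle != nullptr;
    }

    // Returns the address of an exported symbol, or nullptr if it's missing.
    void* symbol(const char* symbolName) const
    {
        if (!m_handle)
            return nullptr;
#ifdef _WIN32
        return reinterpret_cast<void*>(::GetProcAddress(static_cast<HMODULE>(m_handle), symbolName));
#else
        return ::dlsym(m_handle, symbolName);
#endif
    }

    void close()
    {
        if (!m_handle)
            return;
#ifdef _WIN32
        ::FreeLibrary(static_cast<HMODULE>(m_handle));
#else
        ::dlclose(m_handle);
#endif
        m_handle = nullptr;
    }

    ~DynamicLibrary() { close(); }

private:
    void* m_handle = nullptr;
};

With a component like this, linking DevIL at runtime boils down to opening the right library file for the platform and resolving the needed entry points with symbol().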


Another fundamental component for every 2D or 3D engine is the graphics context. In my framework, a graphics context is an abstraction layer over the functionality exposed by the rendering context of a graphics library, like OpenGL or Direct3D. Once I had defined a full set of draw commands for 2D shapes and 3D objects, I made a first specialization of this interface using the Cairo library, with path rendering for drawing 2D graphics only.
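To give a rough idea of the concept (the interface below is a simplified sketch with invented names, the real one exposes many more draw commands), the context can be seen as an abstract class that each back end specializes:

#include <cstddef>
#include <cstdint>

struct Rect { float x, y, width, height; };

// Abstract graphics context: the engine issues draw commands through this
// interface without knowing which graphics library implements them.
class GraphicsContext
{
public:
    virtual ~GraphicsContext() = default;

    virtual void begin() = 0;
    virtual void clear(std::uint32_t rgba) = 0;
    virtual void drawRect(const Rect& rect, std::uint32_t rgba) = 0;
    virtual void drawPath(const float* points, std::size_t count, std::uint32_t rgba) = 0;
    virtual void end() = 0;
};

// The first specialization renders 2D graphics only, using Cairo paths:
// class CairoContext : public GraphicsContext { ... };
// Later the same interface can be implemented with OpenGL, Direct3D or Vulkan.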

TextureMind Framework – Progress #2 – Improve serialization and math classes

Even though this framework has been designed for generic purposes, it will be used mostly to program graphics applications. With this in mind, I implemented a full set of serializable classes to handle complex numbers, vectors and matrices, and all the geometric operations that will be needed to build a 3D engine.

To serialize enum variables that expect constants instead of numbers, I introduced "constant strings" (e.g. LEFT, GREATER, NULL) in human-readable formats like XML or JSON. When such a variable is deserialized by the framework, the constant string is translated into its numeric value; conversely, the numeric value is translated into its constant string during the serialization process.

For instance, an extended 2D vector with anchor variables:

enum PositionAnchorEnum {
    TMD_POSITION_ANCHOR_LEFT = 0,
    TMD_POSITION_ANCHOR_RIGHT = 1,
    TMD_POSITION_ANCHOR_TOP = 2,
    TMD_POSITION_ANCHOR_BOTTOM = 3,
    TMD_POSITION_ANCHOR_NEAR = 4,
    TMD_POSITION_ANCHOR_FAR = 5
};

template <class T>
class ExtVector2 : public Vector2<T>
{
public:
[...]
    T m_x;
    T m_y;
    PositionAnchorEnum m_xAnchor;
    PositionAnchorEnum m_yAnchor;
};

[...]

ExtVector2<float> origin;
origin.m_x = 0;
origin.m_y = 0;
origin.m_xAnchor = TMD_POSITION_ANCHOR_LEFT;
origin.m_yAnchor = TMD_POSITION_ANCHOR_TOP;

is saved to:

<origin x="0" y="0" xAnchor="LEFT" yAnchor="TOP" />
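The translation between constants and numbers can be pictured as a bidirectional lookup table registered for each enum type. This is only a minimal sketch of the idea, not the framework's actual serialization code:

#include <string>
#include <unordered_map>

// Hypothetical per-enum registry: maps constant strings to values and back.
struct EnumStringTable
{
    std::unordered_map<std::string, int> toValue;
    std::unordered_map<int, std::string> toString;

    void add(const std::string& name, int value)
    {
        toValue[name] = value;
        toString[value] = name;
    }
};

static EnumStringTable makePositionAnchorTable()
{
    EnumStringTable table;
    table.add("LEFT",   TMD_POSITION_ANCHOR_LEFT);
    table.add("RIGHT",  TMD_POSITION_ANCHOR_RIGHT);
    table.add("TOP",    TMD_POSITION_ANCHOR_TOP);
    table.add("BOTTOM", TMD_POSITION_ANCHOR_BOTTOM);
    table.add("NEAR",   TMD_POSITION_ANCHOR_NEAR);
    table.add("FAR",    TMD_POSITION_ANCHOR_FAR);
    return table;
}

// Serialization:   toString[m_xAnchor] -> "LEFT"
// Deserialization: toValue["LEFT"]     -> TMD_POSITION_ANCHOR_LEFT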


One year without posts but still alive. Update and news

I just realized that I haven't posted anything on this website for more than one year. Anyway, this site is not really a mirror of my activity. As always, I was very busy with my day job, but I still found some time to continue my private projects, in particular the TextureMind Framework. I really need to write some posts here in the future to bring the situation up to date. I refined the serialization and the graphics context, writing an implementation based on the Cairo library and a first example test.
The context is now able to draw complex structures of primitive 2D shapes along with materials and textures. I'm also writing an engine that will be part of the framework, and it will have great features. I implemented most of the architecture for 2D and 3D scenes: textures, materials, shaders, assets, scripts, animations. I refactored my old material format to cover modern features, taking inspiration from Unreal Engine 4, with some improvements.

Textures can be created starting from images, but also with fixed shaders and program shaders. A fixed-shader texture depends on the context implementation and is not programmable, but it should consistently produce the same output across different implementations, while a program-shader texture can be programmed in GLSL or HLSL.
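In code, the distinction can be imagined more or less like this (a sketch with invented names, not the framework's real classes):

#include <string>

// The three ways a texture can be defined.
enum class TextureSource
{
    Image,          // pixel data loaded from a file
    FixedShader,    // a predefined effect, e.g. a linear or radial gradient
    ProgramShader   // user-written GLSL or HLSL code
};

struct Texture
{
    TextureSource source = TextureSource::Image;

    // Used when source == Image.
    std::string imagePath;

    // Used when source == FixedShader: an identifier that every back end maps
    // to its own primitive (a Cairo pattern, or generated GLSL/HLSL code).
    std::string fixedShaderName;   // e.g. "linear_gradient"

    // Used when source == ProgramShader.
    std::string shaderLanguage;    // "GLSL" or "HLSL"
    std::string shaderSource;
};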

As you can see in the picture, the rounded box with holes is drawn with the Cairo library using a linear pattern, which in my framework is considered a 2D texture with a fixed shader. The same texture can also be drawn by the OpenGL, DirectX or Vulkan implementation of the context; in that case, the fixed shader is translated into GLSL or HLSL code and executed by the graphics library.

This abstraction has been introduced to support basic functionality when advanced graphics libraries are not available. With Cairo we don't have program shaders, but linear and radial shader textures can be translated into linear and radial patterns to make the rendering possible. On the contrary, program-shader textures cannot be rendered at all, because Cairo has no program-shader functionality. This abstraction is useful if you want to reuse the same context API for basic functionality, like software rendering of the application GUI.

Materials are more complex but they use similar concepts in order to be extremely scalable. First of all, I decided to abandon the old format, similar to 3D Studio Max or Maxon Cinema 4D, and adopt a format closer to UE4, based on visual expression nodes, where a node in this case is called a "material component". A material is composed of different stages: displacement, fragment, blend and radiance. Every stage has parameters and a single component as input, which can be a texture with texture coordinates, diffusion with lights and normals, or the combination of several components with "add" or "multiply" nodes.
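To picture the structure (again a rough sketch with invented names), a material can be represented as a small node graph feeding each stage:

#include <memory>
#include <vector>

// A node of the material graph: a "material component".
struct MaterialComponent
{
    enum class Type { Texture, Diffusion, Add, Multiply };

    Type type = Type::Texture;

    // Child components, combined by Add/Multiply nodes.
    std::vector<std::shared_ptr<MaterialComponent>> inputs;
};

// A material is a set of stages, each taking a single component as input.
struct Material
{
    std::shared_ptr<MaterialComponent> displacement;
    std::shared_ptr<MaterialComponent> fragment;
    std::shared_ptr<MaterialComponent> blend;
    std::shared_ptr<MaterialComponent> radiance;
};

// Example: a fragment stage that multiplies a texture by diffuse lighting.
static Material makeExampleMaterial()
{
    auto texture    = std::make_shared<MaterialComponent>();
    texture->type   = MaterialComponent::Type::Texture;

    auto diffusion  = std::make_shared<MaterialComponent>();
    diffusion->type = MaterialComponent::Type::Diffusion;

    auto multiply   = std::make_shared<MaterialComponent>();
    multiply->type  = MaterialComponent::Type::Multiply;
    multiply->inputs = { texture, diffusion };

    Material material;
    material.fragment = multiply;
    return material;
}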

If program shaders are supported by the context implementation, the material is translated into a program shader; otherwise it is rendered as well as possible with the component types supported by the graphics library. In the case of Cairo, program shaders are not available and only texture components are supported, so a single texture component is passed to the fragment stage as input, like in the following diagram:

To draw a simple image with Cairo, like the background in the example test, a material can be created with just a texture image attached to the fragment stage. If a feature is not available in the context implementation, that part of the rendering simply won't be produced, but no error will be generated. You can try to render a 3D scene with Cairo (instead of OpenGL): in that case only the 2D shapes will be rendered, not the polygon meshes, complex materials and program shaders, which are not supported. On the contrary, advanced graphics libraries like OpenGL are always able to render scenes with lower requirements: an OpenGL context should always be able to render a simple 2D scene like the one shown in the example. In the same way, the GUI can be rendered in software with the Cairo library or on the GPU with OpenGL. Advanced functionality is never mandatory for the graphics engine to work, so the engine can scale from a Pac-Man clone to the latest 3D games with ray-tracing functionality.

Now I'm proceeding with the implementation of the graphics engine and I'm pretty excited. The next step is to write implementations of the graphics context with OpenGL, Direct3D and Vulkan, improving the 3D engine. Most of the 2D engine will be used to implement the GUI. I will also implement a converter for importing assets with the Assimp library. I also want to write a series of posts about the progress I made during the development of the framework in the last year.

It’s hard to continue, but I will

I'm so busy with my day job that sometimes, even when I have free time to update this website, I'm too tired to do it and I prefer to rest or do something else. I understand that the lack of updates is costing me views, but this is something I can't avoid. I keep this site alive all by myself: I'm the programmer, the webmaster, the graphics artist. I have to do everything on my own, and I can only do it in my spare time, because the rest of my time (90% of it) is taken by my job, private life and other things. But it won't always be like this; it's just a bad period. I have so much news to share with you.

I'm continuing to program the TextureMind Framework, and I have finally completed the core part made of strings, multi-threading, containers, objects and serialization. Now I'm focusing on the graphics part. I'm designing the architecture of a graphics context for making 2D and 3D graphics without knowing the underlying implementation, which could be built on the Cairo, OpenGL or Vulkan libraries. The graphics context is natively optimized to minimize CPU usage and to handle command buffers. It will also be used to draw the GUI with different implementations and to render 3D scenes. I just finished the architecture of the graphics part and I'm pretty satisfied with my work. Now I have to use the Cairo library to draw the 2D shapes that will be used to draw the GUI.

It's not the first time I have implemented a brand new GUI from scratch, so I know exactly what I have to do. As always, the most important thing is the architecture. Since the framework has serialization for primitive 2D shapes and a graphics engine for drawing them, most of the work is done. The difficult part is the creation of all the widgets and the events they generate. It's important to create just the basic widgets needed to compose a full interface: form windows, buttons, radio buttons, check boxes, labels, frames, scrollbars, tables, toolbars, list views and tree views. Other complex widgets can be created by composing the basic ones. All the widgets will have support for skinning, animation, non-rectangular shapes and alpha blending. Another difficult part is handling resources like images and materials correctly.

After that, I will continue implementing the 3D part of the engine. The first version of it will use Vulkan, as most of the graphics context has been designed to make good use of it. My first target is to finally load a full 3D model with the asset import library and render it with my engine, like I already did in the past with a very old version of the framework. Reaching that level will allow me to produce a long series of applications with full control over them, without heavy frameworks or other third-party dependencies. This framework has been programmed from scratch in every component, including strings and containers such as vector, list, map and multimap, so the executable size will be very small. Even the memory allocation uses a custom algorithm to save speed and reduce fragmentation when small arrays are dynamically allocated at a high frequency. In the future, I want to create the following applications, which will be free: a 3D game; a desktop screen video capture tool with real-time H.264 compression for NVIDIA, AMD and Intel hardware; a program for video editing; a program for painting; a 3D modeller; a program for digital sculpting.
So, as you can see, I'm still full of good ideas for the future, which I will develop through the years, no matter how hard it will be and no matter how much time it will require. See you next time, stay tuned!

Transaction Fee Bubble. Fees 100 times higher in 1 year. Unconfirmed Transactions. I suspect that Bitcoin will burst

Bitcoin is the first cryptocurrency ever made and the first fork of a cryptocurrency protocol. It was the first to introduce the concept of encrypted, decentralized money. Since it was created in 2009, the protocol has been getting obsolete, and the fact that about 150,000 transactions are currently unconfirmed is the evidence. Bitcoin cannot handle high user traffic: many people are starting to see their transactions left unmined (and stuck) for days or weeks. If proof of work had already shown its downsides with its high energy cost, now the evidence comes from the quality of service, as the traffic involves more people and the fees attached to transactions increase to get higher priority.

If the traffic increases even more, with the current level of difficulty the entire system is doomed to burst. There are other cryptocurrencies that solve most of these problems. Bitcoin is now stronger than other cryptocurrencies because it was the first and it has a larger market capitalization and a larger user base. However, that doesn't mean that its concept is better or that it will survive in the future. In my opinion, Bitcoin will burst when the system is no longer able to handle all the transactions, and we can already see the first symptoms. Check this website:

https://blockchain.info/unconfirmed-transactions

You can see with your own eyes that the number of unconfirmed transactions is very high at this moment, and it will increase in the future as more people make transactions. Most of the transactions are not mined because their fees are too low for the miners, who are spending computational power to get the best profit at the least expense. They spent millions of dollars to create mining farms with powerful ASICs and graphics cards, and they want to mine the transactions with the highest fees to get a payback and make even more money. Since nobody is interested in wasting an outstanding amount of power to mine transactions that have lower fees than others, the transactions with lower fees are rejected until they are dropped and created again with higher fees, in an endless loop of speculation that is creating a transaction fee bubble. As you can see from the above chart, the average transaction fee went from $0.27 to $27, 100 times higher in just one year. The fees are growing so fast that transactions will no longer be convenient. People will migrate their money to less expensive currencies, where transactions don't get stuck unconfirmed so easily. For instance, I've started to do it right now, because I had unconfirmed transactions and no decent way to accelerate them without paying mining pools or other fees. And if I'm doing it, I'm sure that right now other people are making the same consideration. So, this is why I believe that Bitcoin is doomed to burst in the future.

Continue reading

TextureMind Framework – Progress #1 – Serialization and log

I continued to program the TextureMind Framework and I'm pretty happy with the result. I hope this framework will give me the chance to increase my software production and save a lot of time (because I don't have much of it). People have told me many times to use existing frameworks to produce my works, and I tried. Most of them are not suitable for what I want to do, or they have issues with their licenses, or I simply don't like them. I want to make something new and innovative, and I feel like I'm about to do it.

- Serialization

Let me say that the serialization is a masterpiece. You can program new classes directly in C++ with a very easy pattern, then save and load all the data in four formats: raw (*.raw), interchangeable binary (*.tmd), human-readable XML (*.xml) and JSON (*.json).
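To give an idea of how such a pattern typically looks, here is a hypothetical sketch; the Serializer interface and the usage classes below are invented names for the example, not the framework's real API:

#include <string>

// Hypothetical minimal serializer interface: the real API is richer,
// but the pattern is the same for raw, binary, XML and JSON back ends.
class Serializer
{
public:
    virtual ~Serializer() = default;
    virtual void field(const char* name, std::string& value) = 0;
    virtual void field(const char* name, int& value) = 0;
};

// A serializable class declares its fields once in a single method,
// which is used for both saving and loading.
class Player
{
public:
    void describe(Serializer& s)
    {
        s.field("name",   m_name);
        s.field("health", m_health);
        s.field("score",  m_score);
    }

private:
    std::string m_name;
    int         m_health = 100;
    int         m_score  = 0;
};

// Hypothetical usage: each format provides its own Serializer implementation.
// XmlWriter xml("player.xml");     player.describe(xml);
//    -> <player name="" health="100" score="0" />
// JsonReader json("player.json");  player.describe(json);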

Continue reading

The world of cryptocurrency and mining

I think that everyone has heard about cryptocurrency and Bitcoin at some point, but I doubt that everybody knows the crazy world behind it. Everybody knows that Bitcoin can be used instead of money to make online payments that cannot be traced. But when I talk to people, even if they work in the field of information technology, nobody seems to know the very basics of cryptocurrency: how it works, how much a bitcoin is worth, how you can buy it, what altcoins are, how you can convert them to money or to other altcoins, how new coins are introduced, the fact that Bitcoin increased exponentially in the last year and that it will grow even more.

Every year in informatics is marked by huge events. YouTube was born in 2005. Facebook started the story of social networks in 2006, followed by Twitter, Instagram and Google+. But 2017 is the year of cryptocurrency. Bitcoin was launched in 2009, and one of the first real-world transactions was 10,000 bitcoins spent on two pizzas; then other coins were introduced into the system and the value of a single coin grew enormously, even though you could buy bitcoins for far less than one dollar in 2009. Just to give you an idea, at this moment 1 bitcoin is worth $9,500. If in the past you had bought 100 bitcoins for 0.30 dollars, now you would have $950,000, about one million dollars. Consider that bitcoins were once used mostly to make illegal transactions, mostly on the deep web, and that those people, unlike me, have known everything about the world of Bitcoin since 2009 and are now filthy rich. But jokes apart, the news that Bitcoin has grown exponentially is recent and it is spreading all over the world. The latest forecasts say that Bitcoin should increase even more in the future, but the predictions diverge: somebody says that the bubble will burst soon, while other people say that, based on the current trend, it should reach about $50,000 in 2020. So, even if Bitcoin is the heaviest coin on the cryptocurrency market and it's reaching its maximum saturation in terms of circulating coins, it should increase even more in the future. Bitcoin and cryptocurrency in general are therefore becoming, in people's minds, a great investment to make huge amounts of money. You don't have to do a boring job every day of your life; the only thing you need to do is buy $30,000 of bitcoins, wait 2-3 years, and end up with $150,000 of bitcoins: that's it. Another way to make profits is to exchange one currency for another, to take advantage of market fluctuations. In this case you don't even need the value to increase, you can make money through wise choices: you can sell altcoins when their value is high and buy them back when it drops, doing what is called "trading". Now that the news is spreading along with its huge promises, everybody is buying bitcoins or altcoins to make profits from fluctuations and trading. But that's not all.

Continue reading

Youtube demonetization: Justice or injustice?

Sometimes I have watched videos of YouTubers who were very angry because YouTube slowly demonetized their videos, making their lives harder. They make very good points, but most of the time they are defending their own interests. As a person totally outside this business, I asked myself whether demonetization is good or bad, whether YouTube is seriously out of its mind or whether the company has good reasons for doing it. Justice or injustice?

First of all, let me say that YouTube demonetization is not something new. Everybody has been talking about it since 2016, but it started in 2012, when YouTube automatically demonetized videos whose content was unfriendly to advertisers, even though YouTube started a massive demonetization campaign only in the last two years. Of course, most YouTubers are against demonetization because they use YouTube to make money, and not as a free form of expression, unlike in 2005, when YouTube was a platform for the free cultural exchange of arts, ideas, facts, news and clips, or simply to broadcast yourself.

Continue reading

The world of information technology is always full of possibilities, if you don’t waste your time

I remember a period when I was depressed; it was 2009. I was born in 1982, when Microsoft started its ascent. I lived through the period of MS-DOS, Microsoft and IBM as a kid, when everybody talked about informatics and money. The world of computer science was so prosperous and full of promises that I started to follow it when I was just a little kid, and I soon discovered that it was one of my biggest passions.

My dream at that time was to become famous with my software, to produce something incredible and to sell it. I started to learn computer programming by myself when I was little, and when I finished high school I wanted to start my own business. In the meantime, Bill Gates became the richest man in the world. I lived through the period of great promises, when small teams could really make money starting from scratch, especially making video games (in the good old times of Doom, Quake and id Software). However, I had to continue with university and I did not have the time or money to follow my dreams. I continued to program alongside my everyday commitments, with the hope of producing something new, but I directed my energy in the wrong direction. I kept chasing the absurd dream of creating competitive software without the resources to do it, hoping that something would change or a miracle would happen. Even if I was a good programmer, I did not have strong knowledge of how to complete a product and make it commercial, or how to start a business.

Continue reading

TextureMind Framework – Work in progress

What is it?

The TextureMind framework is an SDK to develop software with different programming languages on different platforms. The framework is composed of a set of classes to facilitate tasks that require multithreading, vectors, lists, maps, multimaps, parsing, serialization, IPC, networking, graphics and computer vision. The framework will also include a complete set of applications to create images, animations, GUIs and videogames. I'm creating this framework to facilitate the production of software in general. It has been coded by me from scratch and it can be seen as a collection of all the knowledge I have in the field of computer programming. The framework is currently closed source and it will be used just for my personal creations.

Continue reading

Why a team of developers should never waste time programming a new operating system

It may be obvious to many of you, but I have seen teams of amateur developers dreaming of the perfect operating system, starting from the idea that contemporary operating systems (like Unix or Windows) are still far from perfect. In particular, I remember an Italian newsgroup frequented by more than one developer who wanted to create his brand new operating system from scratch, programming it all by himself. They gave me the inspiration to write this article; maybe it can help prevent the same disaster from happening to someone else.

Even if you are the superman of computer programming, today you cannot pursue the impossible dream of creating your own operating system without hurting yourself, for precise technical reasons. I don't want to discuss the difficulties related to the creation of a new file system, virtual memory, inter-process communication, multithreading and so on, because my argument is simpler and more solid than that. Let's assume that you have already programmed a working kernel, a "minimum" set of drivers to make it run on your PC, and that you are ready to share it with the entire world. Well, even in these ideal conditions, the main problem is the companies currently using Windows or Linux, which would have to invest money to drop their operating systems and applications in order to adopt yours; the same goes for the hardware vendors that would have to write the specific drivers, the software houses, customers, professionals, video gamers and so on.

Today there are so many hardware devices that it is almost impossible to reach the same level of performance and support that existing and proven operating systems have achieved over so many years of existence. It's not a matter of programming skills, it's a matter of "temporal gap". Even if you are good enough to achieve perfection on a single machine, you won't be able to obtain the same stability on the wide range of existing personal computers, tablets, smartphones, SBCs and all the devices mounting all the existing peripherals, because you won't have the money, credibility, reputation, experience, employees, followers and customers to do it. The situation in the past was slightly different: Windows was created to run mainly on the x86 family of processors, but there were other operating systems (like AmigaOS) designed to run on the 680x0 family of processors, so the idea of an operating system was more tied to the small set of hardware that the vendor had to sell. Today it's totally different. If you want to create a valid operating system, you have to cover all the existing hardware produced at least in the past 20 years; and even if your main target is a single device, you cannot surpass the existing operating systems, because they are already optimized to work better on the same device in terms of performance and power consumption.

In conclusion, if you are having the crazy idea of creating your own operating system, just forget it, because you would be wasting your time and the opportunity to produce something really useful. You will never produce even an ounce of what is required today to run a modern application on modern hardware, with the same degree of portability and support in terms of graphics, audio and peripherals; and even if you do, there are already more stable operating systems doing the same thing at the exact moment you are having the bad idea of doing it.

Targeting OpenGL is not so easy, don’t get confused by the documentation

I want to write this post to clarify once and for all how the OpenGL extension mechanism works and the correct procedure to target OpenGL versions. I named this article this way because OpenGL is generally badly documented (or difficult to understand) and the OpenGL.org wiki makes things worse. For example, several people got confused by this page:

https://www.opengl.org/wiki/OpenGL_Extension#Core_Extensions

Targeting OpenGL 2.1

These are useful extensions when targeting GL 2.1 hardware. Note that many of the above extensions are also available, if the hardware is still being supported. These represent non-hardware extensions introduced after 2.1, or hardware features not exposed by 2.1's API. Most 2.1 hardware that is still being supported by its maker will provide these, given recent drivers.

And this document:

https://www.opengl.org/registry/specs/ARB/map_buffer_range.txt

"New Procedures and Functions

void *MapBufferRange( enum target, intptr offset, sizeiptr length,
bitfield access );

void FlushMappedBufferRange( enum target, intptr offset, sizeiptr length );

Issues

(1) Why don't the new tokens and entry points in this extension have
"ARB" suffixes like other ARB extensions?

RESOLVED: Unlike a normal ARB extension, this is a strict subset of functionality already approved in OpenGL 3.0. This extension exists only to support that functionality on older hardware that cannot implement a full OpenGL 3.0 driver. Since there are no possible behavior changes between the ARB extension and core features, source code compatibility is improved by not using suffixes on the extension."

so the question is:

- Is GL_ARB_map_buffer_range a core extension or not?

Continue reading

How to parse gl.xml and produce your own loading library

In the previous article I emphasized the importance of not relying on a third-party loading library like GLEW, because OpenGL is too complex and unpredictable. For example, if you want to implement a videogame with average graphics and a large audience of users, OpenGL 2.1 is probably enough. At that point, you may need to load only that part of the library and make the right checks on the extensions, or just use the functions that have been promoted to the core of the version you target. Remember that an extension is not guaranteed to be present on a given version of OpenGL if it's not a core feature, and that this kind of extension was introduced after 3.0 to maintain forward compatibility.

For instance, it's useful to check the extension GL_ARB_vertex_buffer_object only on OpenGL 1.4 (in that case you may want to use glBindBufferARB instead of glBindBuffer), but not on later versions, because it has been promoted to the core from version 1.5 onward. The same applies to other core versions and extensions. If you target OpenGL 2.1, you have to make sure that the extensions typically used by 2.1 applications have not been promoted to the latest OpenGL 4.5 core, and check the extensions on previous versions of the library, making sure to use the appropriate vendor prefix, like ARB. Even if with GLEW you can make this kind of check before using the loaded functions, I don't recommend it, because glewInit() also loads parts that you don't want to use and you run the risk of underestimating the importance of checking the capabilities.
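As a concrete illustration of that kind of check, here is a simplified sketch; it assumes the context is already created and the function pointers are loaded elsewhere, and a robust version should match whole extension tokens rather than substrings:

#include <cstdio>
#include <cstring>
#include <GL/gl.h>

// Returns true if the extension name appears in the GL_EXTENSIONS string
// (the classic pre-3.0 way to query extensions).
static bool hasExtension(const char* name)
{
    const char* extensions = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return extensions != nullptr && std::strstr(extensions, name) != nullptr;
}

// Decide how to bind vertex buffers depending on the context version.
// major/minor would be obtained e.g. by parsing glGetString(GL_VERSION).
static void selectBufferBinding(int major, int minor)
{
    if (major > 1 || (major == 1 && minor >= 5))
    {
        // Promoted to core in OpenGL 1.5: use glBindBuffer directly.
        std::printf("Using core glBindBuffer\n");
    }
    else if (hasExtension("GL_ARB_vertex_buffer_object"))
    {
        // On 1.4 hardware the functionality is only exposed by the extension,
        // with the ARB suffix: glBindBufferARB, glGenBuffersARB, ...
        std::printf("Using glBindBufferARB from GL_ARB_vertex_buffer_object\n");
    }
    else
    {
        std::printf("Vertex buffer objects are not available\n");
    }
}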

Anyway, reading the OpenGL spec and adding the required extensions manually is a time-expensive job that you may not have time for. The Khronos Group has released an XML file with a detailed description of the extensions and the functions for every version of the library; it is also used to generate the gl.h and glext.h header files with a Python script. In the same way, you can program a script that parses the gl.xml file to generate your own loading library, making the appropriate checks of the extensions and including only the parts that you really need to load in your project. You can find the gl.xml file here:

Continue reading

Why loading libraries are dangerous to develop OpenGL applications

OpenGL is not so easy to use. The API exposes thousands of functions that are grouped into extensions and core features, and you have to check them for every single display driver release or the 3D application may not work. Since OpenGL is a graphics library used to program cool graphics effects without serious knowledge of the underlying display driver, a large range of developers is tempted to use it regardless of the technical problems. For example, the functions are loaded "automagically" by an external loading library (like GLEW) and used to produce the desired effect, pretending that they are available everywhere. Of course this is totally wrong, because OpenGL is scattered across dozens of extensions and core features that are tied to the "target" version you want to support. Loading libraries like GLEW are dangerous because they try to load all the available OpenGL functions implemented by the display driver without making a proper check, giving you the illusion that the problem doesn't exist. The main problem with this approach is that you cannot develop a good OpenGL application without making the following decision:

- How many OpenGL versions and extensions do I have to support?

From this choice you can define the graphics aspect of the application and how to scale it to support a large range of display drivers, including physical hardware and the drivers provided by virtual machines. For example, VirtualBox with guest additions uses Chromium 1.9, which comes with OpenGL 2.1 and GLSL 1.20, so your application won't start if you programmed it against OpenGL 4.5; even worse, it won't start on graphics cards that support at most version 4.4 (which is very recent). For this reason, it's necessary to have full awareness of the OpenGL scalability principles that must be applied to start on most of the available graphics cards, reducing or improving the graphics quality based on the version you decided to target. With this level of awareness, you will realize that you don't need any kind of loading library to use OpenGL, but only a good check of the available features, which you can program by yourself. Moreover, libraries like GLEW are the worst because they replace the official gl.h and glext.h header files with a custom version anchored to the OpenGL version supported by that particular GLEW release.
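Loading only the functions you actually need by hand is not much work. On Windows, for example, a minimal sketch could look like this (error handling reduced to the bare minimum):

#ifdef _WIN32
#include <windows.h>
#include <GL/gl.h>

// Typedef for the entry point we want (normally taken from glext.h).
typedef void (APIENTRY *PFNGLBINDBUFFERPROC)(GLenum target, GLuint buffer);

static PFNGLBINDBUFFERPROC myBindBuffer = nullptr;

// Resolve a single OpenGL function after the context has been created.
// wglGetProcAddress covers post-1.1 entry points; GetProcAddress on
// opengl32.dll covers the 1.0/1.1 ones.
static void* getGLProc(const char* name)
{
    void* proc = reinterpret_cast<void*>(wglGetProcAddress(name));
    if (proc == nullptr)
    {
        HMODULE module = GetModuleHandleA("opengl32.dll");
        proc = reinterpret_cast<void*>(GetProcAddress(module, name));
    }
    return proc;
}

static bool loadBufferFunctions()
{
    // Load only what the application really uses, after checking the
    // version/extension as described in the previous article.
    myBindBuffer = reinterpret_cast<PFNGLBINDBUFFERPROC>(getGLProc("glBindBuffer"));
    return myBindBuffer != nullptr;
}
#endif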

Continue reading

How the deprecated OpenGL matrix model works

Even if nowadays everybody seems to drop the OpenGL functions that are deprecated in the core profile, it doesn't mean that you don't need to use them in the compatibility profile, or that you don't want to know how they work. I searched the web for more information on how the old, deprecated OpenGL matrices are implemented and I didn't find anything (except tutorials on how to use them!). My doubt was mainly about the order of operations, because I needed to write a C++ implementation that keeps exactly the same behavior. I used OpenGL matrices in the past without worrying about how they were implemented; I had a precise idea, but now I have to be 100% sure. Even if we know how to implement operations between matrices, the row-column product is not commutative, so the internal implementation can make the difference. In the end, my question is:

- What is the matrix row/column order and how is the product implemented in OpenGL?

Tired of finding pages saying how useless and deprecated they are now, I had to check the Mesa source code myself to find what I was looking for:

P = A * B;

P[0] = A[0] * B[0] + A[4] * B[1] + A[8] * B[2] + A[12] * B[3];
P[4] = A[0] * B[4] + A[4] * B[5] + A[8] * B[6] + A[12] * B[7];
P[8] = A[0] * B[8] + A[4] * B[9] + A[8] * B[10] + A[12] * B[11];
P[12] = A[0] * B[12] + A[4] * B[13] + A[8] * B[14] + A[12] * B[15];

P[1] = A[1] * B[0] + A[5] * B[1] + A[9] * B[2] + A[13] * B[3];
P[5] = A[1] * B[4] + A[5] * B[5] + A[9] * B[6] + A[13] * B[7];
P[9] = A[1] * B[8] + A[5] * B[9] + A[9] * B[10] + A[13] * B[11];
P[13] = A[1] * B[12] + A[5] * B[13] + A[9] * B[14] + A[13] * B[15];

P[2] = A[2] * B[0] + A[6] * B[1] + A[10] * B[2] + A[14] * B[3];
P[6] = A[2] * B[4] + A[6] * B[5] + A[10] * B[6] + A[14] * B[7];
P[10] = A[2] * B[8] + A[6] * B[9] + A[10] * B[10] + A[14] * B[11];
P[14] = A[2] * B[12] + A[6] * B[13] + A[10] * B[14] + A[14] * B[15];

P[3] = A[3] * B[0] + A[7] * B[1] + A[11] * B[2] + A[15] * B[3];
P[7] = A[3] * B[4] + A[7] * B[5] + A[11] * B[6] + A[15] * B[7];
P[11] = A[3] * B[8] + A[7] * B[9] + A[11] * B[10] + A[15] * B[11];
P[15] = A[3] * B[12] + A[7] * B[13] + A[11] * B[14] + A[15] * B[15];

where A and B are 4x4 matrices stored as arrays of 16 floats and P is the result of the product. As you can see, this snippet clarifies how rows and columns are internally ordered (column-major, with element (row, column) stored at index column*4 + row) and how the product is implemented. In conclusion, the OpenGL functions that modify the current matrix are implemented by Mesa in this way:

Continue reading