Category Archives: Thoughts

More than tutorials or knowledge, this category collects personal thoughts on various topics, such as computing, science, politics, history and the arts, or one of the other categories.

Why I decided to adopt Vulkan

I had to choose a graphics library for the 3D rendering part of my framework, and I decided to adopt Vulkan for a number of reasons. First of all, after years of experience with the OpenGL library, I concluded that the trend of adding extensions is bad for software maintenance. You may say that there are many wrappers around and that you don't have to care about the underlying library, but that is not good practice at all.


You are going to add an external dependency to your software, you don't have control over it, and as a programmer you are not going to learn anything (apart from the wrapper itself). I programmed several OpenGL applications without using any wrapper, and it's the best thing to do: that way, you learn how the library works and all the issues related to it. One of the worst is the huge number of extensions.

If you are lucky, the OpenGL version that you are targeting already includes all the extensions that you need, so you don't have to check them one by one for each feature you support, but only the OpenGL version itself. However, the target version always depends on what you need to do. If you want to support OpenGL 4.5, you need to know whether all the GPUs you want to support expose that particular version. One day you could discover that OpenGL 4.5 is not supported on a particular piece of hardware and that your engine is not so good from a scalability point of view. Falling back to older OpenGL versions may require an entire refactor of the engine, which is not pleasant if your release is due tomorrow. On the other hand, if you need to support older versions of OpenGL, from 3.0 onward, you will experience first-hand how hellish it can be to deal with the outstanding number of extensions that the library needs just to support the most basic things.

And this is not even the main problem. Even if you drop the past, you cannot prevent the GPU vendors and the Khronos Group from releasing extensions in the future that will be required to get decent performance on modern hardware, because the entire OpenGL architecture is obsolete.
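To make the version-targeting problem concrete, here is a minimal sketch of the kind of check an engine has to perform before assuming a target version is available. This is my own illustration, not code from any real engine; `parseGLVersion` and `meetsTarget` are hypothetical helpers that operate on a GL_VERSION-style string:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Parse the "major.minor" prefix of a GL_VERSION-style string.
// Returns major * 100 + minor, or 0 if the string is malformed.
int parseGLVersion(const std::string& versionString) {
    int major = 0, minor = 0;
    if (std::sscanf(versionString.c_str(), "%d.%d", &major, &minor) != 2)
        return 0;
    return major * 100 + minor;
}

// True if the reported version satisfies the version our renderer targets;
// if it doesn't, the engine must fall back to a lower-featured code path.
bool meetsTarget(const std::string& reported, int targetMajor, int targetMinor) {
    return parseGLVersion(reported) >= targetMajor * 100 + targetMinor;
}
```

The point is not the parsing itself, but that every rendering path must be gated by a check like this, one per target tier, rather than assuming the highest version everywhere.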
Continue reading

One year without posts but still alive. Update and news

I just realized that I haven't posted anything on this website for more than a year. Anyway, the site doesn't really mirror my activity in that time. As always, I was very busy with my day job, but I still found some time to continue my private projects, in particular the TextureMind Framework. I really need to write some posts here in the future to bring the situation up to date. I perfected the serialization and the graphics context, writing an implementation with the Cairo library and the first example test.
Now the context is able to draw complex structures of primitive 2D shapes along with materials and textures. I'm also writing an engine that will be part of the framework, and it will have great features. I implemented most of the architecture for 2D and 3D scenes: textures, materials, shaders, assets, scripts, animations. I refactored my old material format to cover modern features, taking inspiration from Unreal Engine 4 with some improvements.

Textures can be created starting from images, but also from fixed shaders and program shaders. A fixed shader texture depends on the context implementation and is not programmable, but it should consistently produce the same output across different implementations, while a program shader texture can be programmed in GLSL or HLSL.

As you can see in the picture, the rounded box with holes is drawn with the Cairo library using a linear pattern, which in my framework is considered a 2D texture with a fixed shader. The same texture can also be drawn by the OpenGL, DirectX or Vulkan implementations of the context. In that case, the fixed shader is translated into GLSL or HLSL code and executed by the graphics library.

This abstraction has been introduced to support basic functionality when advanced graphics libraries are not available. With Cairo we don't have program shaders, but linear and radial shader textures can be translated into linear and radial patterns to make the rendering possible. On the contrary, program shader textures cannot be rendered at all, because Cairo has no program shader functionality. This abstraction is useful if you want to reuse the same context API for basic functionality, such as software rendering of the application's GUI.
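As an illustration of the idea (a sketch of my own with hypothetical names, not the framework's real API), a fixed linear-gradient shader could be described by a small struct that a Cairo backend maps to `cairo_pattern_create_linear()`, while a GLSL-capable backend translates it into shader code:

```cpp
#include <cassert>
#include <string>

// Hypothetical descriptor for a "fixed shader" linear gradient:
// two color stops interpolated along the gradient parameter t.
struct LinearGradientFixedShader {
    float r0, g0, b0, a0;  // color at t = 0
    float r1, g1, b1, a1;  // color at t = 1
};

// Translate the fixed shader into a GLSL snippet (illustrative only; a
// real translator would be far more elaborate).
std::string toGLSL(const LinearGradientFixedShader& s) {
    auto vec4 = [](float r, float g, float b, float a) {
        return "vec4(" + std::to_string(r) + ", " + std::to_string(g) + ", "
             + std::to_string(b) + ", " + std::to_string(a) + ")";
    };
    return "vec4 gradient(float t) {\n"
           "    return mix(" + vec4(s.r0, s.g0, s.b0, s.a0) + ",\n"
           "               " + vec4(s.r1, s.g1, s.b1, s.a1) + ", t);\n"
           "}\n";
}
```

The same descriptor drives both backends, which is what makes the fixed shader produce consistent output across implementations.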

Materials are more complex, but they use similar concepts to be extremely scalable. First of all, I decided to abandon the old format, similar to 3D Studio Max or Maxon Cinema 4D, and to adopt another format closer to UE4, based on visual expression nodes, where a node in this case is called a "material component". A material is composed of different stages: displacement, fragment, blend and radiance. Every stage has parameters and a single input component, which can be a texture with texture coordinates, diffusion with lights and normals, or the combination of more components through "add" or "multiply" nodes.
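The node concept described above can be sketched like this. It is an illustration of mine with hypothetical names: constant leaves stand in for real sampled texture or lighting components, and interior nodes combine their inputs with "add" or "multiply":

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical sketch of a "material component" node graph.
struct MaterialComponent {
    enum Kind { Constant, Add, Multiply } kind;
    float value = 0.0f;  // used only by Constant leaves
    std::vector<std::shared_ptr<MaterialComponent>> inputs;

    // Evaluate the subtree rooted at this node.
    float evaluate() const {
        switch (kind) {
        case Constant: return value;
        case Add: {
            float sum = 0.0f;
            for (const auto& in : inputs) sum += in->evaluate();
            return sum;
        }
        case Multiply: {
            float prod = 1.0f;
            for (const auto& in : inputs) prod *= in->evaluate();
            return prod;
        }
        }
        return 0.0f;
    }
};

std::shared_ptr<MaterialComponent> constant(float v) {
    auto c = std::make_shared<MaterialComponent>();
    c->kind = MaterialComponent::Constant;
    c->value = v;
    return c;
}

std::shared_ptr<MaterialComponent> combine(MaterialComponent::Kind k,
        std::shared_ptr<MaterialComponent> a,
        std::shared_ptr<MaterialComponent> b) {
    auto n = std::make_shared<MaterialComponent>();
    n->kind = k;
    n->inputs = {std::move(a), std::move(b)};
    return n;
}
```

In a real engine the tree would be translated to shader code when program shaders are available, or walked directly (with only the supported node types) on a backend like Cairo.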

If program shaders are supported by the context implementation, the material is translated into a program shader; otherwise it will be rendered as well as possible, using the component types supported by the graphics library. In the case of Cairo, program shaders are not available and only texture components are supported, so a single texture component is passed to the fragment stage as input, as in the following diagram:

To draw a simple image with Cairo, like the background in the example test, a material can be created with just a texture image attached to the fragment stage. If a feature is not available in the context implementation, the rendering won't be produced, but no error will be generated. You can try to render a 3D scene with Cairo (instead of OpenGL): in that case only the 2D shapes will be rendered, not the polygon meshes, complex materials and program shaders, which are not supported. On the contrary, advanced graphics libraries like OpenGL are always able to render scenes with lower features: an OpenGL context should always be able to render a simple 2D scene like the one shown in the example. In the same way, the GUI can be rendered in software with Cairo or on the GPU with OpenGL. However, advanced functionality is not mandatory for the graphics engine to work. In this way, the engine can scale from a Pac-Man clone to the latest 3D games with ray-tracing functionality.

Now I'm proceeding with the implementation of the graphics engine and I'm pretty excited. The next step is to write implementations of the graphics context with OpenGL, Direct3D and Vulkan, improving the 3D engine. Most of the 2D engine will be used to implement the GUI. I will implement a converter for importing assets with the Assimp library. I also want to write a series of posts about the progress I made during the development of the framework in the last year.

It’s hard to continue, but I will

I'm so busy with my day job that sometimes, even when I have free time to update this website, I'm so tired that I don't have the strength to do it and I prefer to rest doing something else. I understand that the lack of updates is costing me views, but this is something I can't avoid. I keep this site alive all alone: I'm the programmer, the webmaster, the graphics artist; I have to do everything by myself, and I can do it only in my spare time, because the rest of the time (90% of it) is occupied by my job, private life and other things. But it won't always be like this; it's just a bad period.

I have so much news to share with you. I'm continuing to program the TextureMind Framework, and I finally completed the core part made of strings, multi-threading, containers, objects and serialization. Now I'm focusing on the graphics part. I'm designing the architecture of a graphics context that is meant for doing 2D and 3D graphics without knowing the underlying implementation, which could be based on the Cairo, OpenGL or Vulkan libraries. The graphics context is natively optimized to minimize CPU usage and to handle command buffers. It will also be used to draw the GUI with different implementations and to render 3D scenes.

I just finished the architecture of the graphics part and I'm pretty satisfied with my work. Now I have to use the Cairo library to draw the 2D shapes that will be used to draw the GUI. It's not the first time that I implement a brand new GUI from scratch, so I know exactly what I have to do. As always, the most important thing is the architecture. Since the framework has serialization for primitive 2D shapes and a graphics engine for drawing them, most of the work is done. The difficult part is the creation of all the widgets and the events they generate.
It's important to create just the basic widgets that are needed to compose the full interface, namely: form windows, buttons, radio buttons, check boxes, labels, frames, scrollbars, tables, toolbars, list views and tree views. Other complex widgets can be created by composing these. All the widgets will support skinning, animation, non-rectangular shapes and alpha blending. Another difficult part is handling resources like images and materials correctly.

After that, I will continue implementing the 3D part of the engine. The first version of it will make use of Vulkan, as most of the graphics context is designed to make good use of it. My first target is to finally load a full 3D model with the asset import library and render it with my engine, like I already did in the past with a very old version of the framework. Reaching such a level will allow me to produce a long series of applications with full control over them, without heavy frameworks or other third-party dependencies. This framework has been programmed from scratch in all its components, including strings and containers such as vector, list, map and multimap, so the executable size will be very small. Even memory allocation uses a custom algorithm to improve speed and reduce fragmentation when small arrays are frequently allocated.

In the future, I want to create the following applications, which will be free: a 3D game; a desktop screen video capture tool with real-time H.264 compression for NVIDIA, AMD and Intel hardware; a program for video editing; a program for painting; a 3D modeller; a program for digital sculpting. So, as you can see, I'm still full of good ideas for the future, which I will develop through the years, no matter how hard it will be, no matter how much time it will require. See you next time, stay tuned!
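The custom-allocation idea mentioned above can be sketched as a fixed-size block pool. This is my own minimal illustration of the general technique, not the framework's actual allocator: blocks come from one contiguous buffer and are recycled through a free list, so frequent small allocations avoid the general-purpose heap and its fragmentation.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A pool of equally sized blocks backed by one contiguous buffer.
class SmallBlockPool {
public:
    SmallBlockPool(std::size_t blockSize, std::size_t blockCount)
        : storage_(blockSize * blockCount), blockSize_(blockSize) {
        // Initially every block is on the free list.
        for (std::size_t i = 0; i < blockCount; ++i)
            freeList_.push_back(storage_.data() + i * blockSize_);
    }

    // Hand out a free block, or nullptr when the pool is exhausted
    // (the caller would then fall back to the regular heap).
    void* allocate() {
        if (freeList_.empty()) return nullptr;
        void* p = freeList_.back();
        freeList_.pop_back();
        return p;
    }

    // Returning a block is just pushing it back on the free list: O(1),
    // no coalescing, no fragmentation.
    void deallocate(void* p) { freeList_.push_back(static_cast<char*>(p)); }

    std::size_t freeBlocks() const { return freeList_.size(); }

private:
    std::vector<char> storage_;
    std::size_t blockSize_;
    std::vector<char*> freeList_;
};
```

A production allocator would keep several pools for different size classes and handle alignment; the sketch only shows why small-array churn becomes cheap and fragmentation-free.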

Transaction Fee Bubble. Fees 100 times higher in 1 year. Unconfirmed Transactions. I suspect that Bitcoin will burst

Bitcoin is the first cryptocurrency ever made and the original implementation of a cryptocurrency protocol. It was the first to combine cryptography and the decentralization of money. Created in 2009, the protocol is now becoming obsolete, and the fact that about 150,000 transactions are unconfirmed is the evidence. Bitcoin cannot handle heavy user traffic: many people are finding their transactions unmined (and stuck) for days or weeks. If proof-of-work had already shown its downsides with its high energy cost, now the evidence comes from the quality of service, as the traffic involves more people and the fees attached to transactions increase to get higher priority.

If traffic increases even further, with the current level of difficulty the entire system is doomed to burst. There are other cryptocurrencies that solve most of these problems. Bitcoin is stronger than the others now because it was the first and has a larger market capitalization and user base, but that doesn't mean its concept is better or that it will survive in the future. In my opinion, Bitcoin will burst when the system is no longer able to handle all the transactions, and we can see the first symptoms right away. Check this website:

https://blockchain.info/unconfirmed-transactions

You can see for yourself that the number of unconfirmed transactions is very high at this moment, and it will increase in the future as more people make transactions. Most of the transactions are not mined because their fees are too low for the miners, who are spending computational power to get the best profit at the least expense. They spent millions of dollars to build mining farms with powerful ASICs and graphics cards, and they want to mine the transactions with the highest fees to earn their money back and make even more. Since nobody is interested in wasting an outstanding amount of power to mine transactions with lower fees than others, the low-fee transactions are rejected until they are dropped and created again with higher fees, in an endless loop of speculation that is creating a transaction fee bubble. As you can see from the above chart, the average transaction fee went from $0.27 to $27, a hundred times higher in just one year. Fees are growing so fast that transactions will no longer be convenient. People will migrate their money to less expensive currencies, where transactions don't get stuck unconfirmed so easily. For instance, I'm starting to do it right now, because I had unconfirmed transactions and no decent way to accelerate them without paying mining pools or other fees. And if I'm doing it, I'm sure that at this moment other people are reaching the same conclusion. This is why I believe that Bitcoin is doomed to burst in the future.

Continue reading

Youtube demonetization: Justice or injustice?

I have sometimes watched videos by YouTubers who were very angry because YouTube was slowly demonetizing their videos, making their lives harder. They make some very good points, but most of the time they are defending their own interests. As a person totally outside this business, I wondered whether demonetization is good or bad, whether YouTube has seriously lost its mind or whether the company has good reasons for doing it. Justice or injustice?

First of all, let me say that YouTube demonetization is not something new. Everybody has been talking about it since 2016, but it started in 2012, when YouTube automatically demonetized videos whose content was unfriendly to advertisers, even though YouTube began a massive demonetization campaign only in the last two years. Of course, most YouTubers are against demonetization because they use YouTube to make money, and not as a free form of expression, unlike in 2005, when YouTube was a platform for the free cultural exchange of arts, ideas, facts, news and clips, or simply to broadcast yourself.

Continue reading

The world of information technology is always full of possibilities, if you don’t waste your time

I remember a period when I was depressed: it was 2009. I was born in 1982, when Microsoft started its ascent. I lived through the era of MS-DOS, Microsoft and IBM as a kid, when everybody talked about computing and money. The world of computer science was so prosperous and full of promise that I started to follow it when I was just a little kid, and later I discovered that it was one of my biggest passions.

My dream at that time was to become famous with my software, to produce something incredible and to sell it. I started to learn computer programming by myself when I was young, and when I finished high school I wanted to start my own business. In the meantime, Bill Gates became the richest man in the world. I lived through the period of great promise when small teams could really make money starting from scratch, especially by making video games (in the good old times of Doom, Quake and id Software). However, I had to continue with university and I did not have the time or money to follow my dreams. I kept programming alongside my everyday commitments, hoping to produce something new, but I directed my energies in the wrong direction. I kept chasing the absurd dream of creating competitive software without the resources to do it, hoping that something would change or a miracle would happen. Even if I was a good programmer, I did not have strong knowledge of how to finish a product and commercialize it, or how to start a business.

Continue reading

Why a team of developers should never waste time programming a new operating system

It may be obvious to many of you, but I have seen teams of amateur developers dreaming of the perfect operating system, starting from the idea that contemporary operating systems (like Unix or Windows) are still far from perfect. In particular, I remember an Italian newsgroup frequented by more than one developer who wanted to create his brand new operating system from scratch, programming it all by himself. They gave me the inspiration to write this article; maybe it can help prevent the same disaster from happening to someone else.

Even if you are the superman of computer programming, today you cannot pursue the impossible dream of creating your own operating system without hurting yourself, for precise technical reasons. I don't want to discuss here the difficulties related to the creation of a new file system, virtual memory, inter-process communication, multithreading and so on, because my example is simpler and more solid than that. I will assume that you have already programmed a working kernel, a "minimum" set of drivers to make it run on your PC, and that you are ready to share it with the entire world. Well, even in these ideal conditions, the main problem is the companies currently using Windows or Linux, which would have to invest money to drop their operating systems and applications in order to adopt yours; the same goes for the hardware vendors that would have to write specific drivers, the software houses, customers, professionals, video gamers and so on. Today there are so many hardware devices that it is almost impossible to reach the level of support that the existing, proven operating systems have achieved over so many years of existence. It's not a matter of programming skills; it's a matter of "temporal gap".
Even if you are good enough to achieve perfection on a single machine, you will not be able to obtain the same stability across the wide range of existing personal computers, tablets, smartphones, single-board computers and all the devices mounting all the existing peripherals, because you won't have the money, credibility, reputation, experience, employees, followers and customers to do it. The situation in the past was slightly different: Windows was created to run mainly on the x86 family of processors, but there were other operating systems (like AmigaOS) designed to run on the 680x0 family, so the idea of an operating system was more tied to the small set of hardware that the vendor had to sell. Today it's totally different. If you want to create a valid operating system, you have to cover all the existing hardware produced at least in the past 20 years; and even if your main target is a single device, you cannot surpass the existing operating systems, because they are already optimized to work better on that same device in terms of performance and power consumption. In conclusion, if you are entertaining the crazy idea of creating your own operating system, just forget it, because you are wasting your time and the opportunity to produce something really useful. You will never produce even an ounce of what is required today to run a modern application on modern hardware, with the same degree of portability and support in terms of graphics, audio and peripherals; and even if you do, there will already be more stable operating systems doing the same thing at the very moment you have the bad idea of doing it.

Why loading libraries are dangerous for developing OpenGL applications

OpenGL is not so easy to use. The API exposes thousands of functions, grouped into extensions and core features, that you have to check against every single display driver release, or the 3D application may not work. Since OpenGL is a graphics library used to program cool graphics effects without serious knowledge of the underlying display driver, a large number of developers are tempted to use it regardless of the technical problems. For example, the functions are loaded "automagically" by an external loading library (like GLEW) and used to produce the desired effect, pretending that they are available everywhere. Of course this is totally wrong, because OpenGL is scattered across dozens of extensions and core features that are linked to the "target" version that you want to support. Loading libraries like GLEW are dangerous because they try to load all the available OpenGL functions implemented by the display driver without making a proper check, giving you the illusion that the problem doesn't exist. The main problem with this approach is that you cannot develop a good OpenGL application without making the following decision:

- How many OpenGL versions and extensions do I have to support?

From this choice you can define the graphics aspect of the application and how to scale it to support a large range of display drivers, including physical hardware and the drivers exposed by virtual machines. For example, VirtualBox with guest additions uses Chromium 1.9, which comes with OpenGL 2.1 and GLSL 1.20, so your application won't start if you programmed it against OpenGL 4.5; even worse, it won't start on graphics cards that support at most version 4.4 (which is very recent). For this reason, it's necessary to have full awareness of the OpenGL scalability principles that must be applied to start on most of the available graphics cards, reducing or improving the graphics quality based on the version that you decided to target. With this level of awareness, you will realize that you don't need any kind of loading library to use OpenGL, only a good check of the available features, which you can program by yourself. Moreover, libraries like GLEW are the worst, because they replace the official gl.h and glext.h header files with a custom version anchored to the OpenGL version supported by that particular GLEW version.
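The "good check of the available features" really can be programmed by hand. As one example of my own (not from any loading library), for drivers that expose extensions as one space-separated string, the classic glGetString(GL_EXTENSIONS) form, the check must match whole tokens rather than substrings, so that a search for "GL_ARB_texture_float" does not falsely succeed inside "GL_ARB_texture_float_linear":

```cpp
#include <cassert>
#include <cstring>

// Return true if `name` appears as a whole space-delimited token inside
// `extensionList` (a GL_EXTENSIONS-style string).
bool hasExtension(const char* extensionList, const char* name) {
    const std::size_t len = std::strlen(name);
    const char* p = extensionList;
    while ((p = std::strstr(p, name)) != nullptr) {
        // A real match must start at the string start or after a space...
        const bool startOk = (p == extensionList) || (p[-1] == ' ');
        // ...and end at a space or at the end of the list.
        const bool endOk = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk) return true;
        p += len;  // substring hit only; keep searching
    }
    return false;
}
```

On OpenGL 3.0+ contexts the same idea is expressed by iterating glGetStringi(GL_EXTENSIONS, i), but the principle is identical: you decide which features you require and verify each one before using it.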

Continue reading

New site

Hello everyone, and welcome to my new personal page. I'm moving here for good from my old site, TSRevolution (www.tsrevolution.com), since it was part of a past project that I no longer intend to continue. I decided to set up this new site as a blog; however, it won't be like those pages people use nowadays to keep a diary of their personal affairs. I will try to stay on the topic of computer programming, releasing sources, projects and articles on computer graphics subjects. I'm looking for some sites (relevant to the topic) to link, so if you are a programmer, have a page and are interested, you can contact me at the email address gingegneri82@hotmail.it, or leave a comment on this post. A special thanks goes to the authors of WordPress and to the artist who designed the look of this page.

Happy browsing!