I'm so busy with my actual job that even when I have free time to update this web site, I'm often too tired to do it and I prefer to rest doing something else. I understand that the lack of updates is costing me views, but this is something I can't avoid. I keep this site alive all alone: I'm the programmer, the webmaster, the graphics artist. I have to do everything by myself, and I can do it only in my spare time, because the rest of the time (90% of it) is taken up by my job, private life and other stuff. But it won't always be like this, it's just a bad period. I have so much news to share with you. I'm continuing to program the TextureMind Framework, and I finally completed the core part made of strings, multi-threading, containers, objects and serialization. Now I'm focusing on the graphics part. I'm designing the architecture of a Graphics Context for doing 2D and 3D graphics without knowing the underlying implementation, which could be based on the Cairo, OpenGL or Vulkan libraries. The graphics context is natively optimized to minimize CPU usage and to handle command buffers. It will also be used for drawing the GUI with different implementations and for rendering 3D scenes. I just finished the architecture of the graphics part and I'm pretty satisfied with my job. Now I have to use the Cairo library to draw the 2D shapes that will be used to render the GUI. It's not the first time I've implemented a brand new GUI from scratch, so I know exactly what I have to do. As always, the most important thing is the architecture. Since the framework already has serialization for primitive 2D shapes and a graphics engine for drawing them, most of the work is done. The difficult part is the creation of all the widgets and the events generated by them.
It's important to create just the basic widgets that are needed to compose a full interface, namely: Form windows, Buttons, Radio buttons, Check boxes, Labels, Frames, Scrollbars, Tables, Toolbars, Listviews and Treeviews. Other complex widgets can be created by composing these. All the widgets will have support for skinning, animation, non-rectangular shapes and alpha blending. Another difficult part is handling resources like images and materials correctly. After that, I will continue implementing the 3D part of the engine. The first version of it will use Vulkan, as most of the graphics context is designed to make good use of it. My first target is to finally load a full 3D model with the asset import library and render it with my engine, as I already did in the past with a very old version of the framework. Reaching that level will allow me to produce a long series of applications with full control over them, without heavy frameworks or other third-party dependencies. This framework has been programmed from scratch down to its very components, including strings and containers such as vector, list, map and multimap, so the executable size will be very small. Even the memory allocation uses a custom algorithm to improve speed and reduce memory fragmentation when small arrays are frequently allocated dynamically. In the future, I want to create the following applications, which will be free: a 3D game; a desktop screen video capture tool with realtime h264 compression for the NVIDIA, AMD and Intel series; a program for video editing; a program for painting; a 3D modeller; a program for digital sculpting. So, as you can see, I'm still full of good ideas for the future, which I will develop through the years, no matter how hard it will be, no matter how much time it will require. See you next time, stay tuned!
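To give an idea of the backend-independent design described above, here is a hypothetical sketch (the names are mine, not the framework's real API): drawing goes through an abstract context, so the caller never knows whether the implementation is Cairo, OpenGL or Vulkan, and calls can be recorded into a command buffer instead of being executed immediately.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Abstract 2D/3D context: the GUI draws against this interface only.
struct GraphicsContext {
    virtual ~GraphicsContext() = default;
    virtual void drawRect(float x, float y, float w, float h) = 0;
    virtual void flush() = 0;  // submit whatever was recorded so far
};

// A trivial recording backend: it only appends commands to a buffer, the way
// a Vulkan-style implementation would fill a command buffer before submission.
struct RecordingContext : GraphicsContext {
    std::vector<std::string> commands;
    void drawRect(float, float, float, float) override {
        commands.push_back("drawRect");
    }
    void flush() override {
        commands.push_back("flush");
    }
};
```

A widget renderer written against GraphicsContext works unchanged whether the concrete backend rasterizes with Cairo or records Vulkan command buffers.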
Bitcoin is the first cryptocurrency ever made and the root from which the other cryptocurrency protocols forked. It was the first to introduce the concepts of encryption and decentralization of money. But since it was created in 2009, the protocol is now getting obsolete. The fact that about 150'000 transactions are currently unconfirmed is the evidence. Bitcoin cannot handle a high volume of traffic: many people are starting to see their transactions stuck unmined for days or weeks. If PoW already showed its drawbacks with its high energy cost, now the evidence comes from the quality of service, as more people join the traffic and the fees attached to transactions increase to get higher priority.
If traffic increases even more, with the current level of difficulty the entire system is doomed to burst. There are other cryptocurrencies that solve most of these problems. Bitcoin is still stronger than the other cryptocurrencies because it was the first and it has a larger market capital and a larger user base. However, that doesn't mean its concept is better or that it will survive in the future. In my opinion, Bitcoin will burst when the system is no longer able to handle all the transactions. We can see the first symptoms right away. Check this website:
You can see with your own eyes that the number of unconfirmed transactions is very high at this moment, and it will increase in the future as more people make transactions. Most of the transactions are not mined because their fees are too low for the miners, who are spending computational power to get the best profit at the least expense. They spent millions of dollars to create mining farms with powerful ASICs and graphics cards. They want to mine the transactions with the highest fees to get a payback and make even more money. Since nobody is interested in wasting an outstanding amount of power to mine transactions with lower fees than others, the transactions with lower fees are rejected until they are dropped and created again with higher fees, in an infinite loop of speculation that is creating a Transaction Fee Bubble. As you can see from the above chart, the average transaction fee went from $0.27 to $27, 100 times higher in just one year. The fees are growing so fast that transactions will no longer be convenient. People will migrate their money to less expensive currencies, where transactions don't get stuck unconfirmed so easily. For instance, I've started to do it right now, because I had unconfirmed transactions and no decent way to accelerate them without paying mining pools or other fees. And if I'm doing it, I'm sure that at this moment other people are making the same consideration. So, this is the reason why I believe that Bitcoin is doomed to burst in the future.
I continued to program the TextureMind Framework and I'm pretty happy with the result. I hope this framework will give me the chance to increase my software production and save most of my time (because I don't have much of it). People have told me many times to use existing frameworks to produce my works, and I tried. Most of them are not suitable for what I want to do, or they have issues with their licenses, or I simply don't like them. I want to make something new and innovative, and I feel like I'm about to do it.
Let me say that the serialization is a masterpiece. You can program new classes directly in C++ with a very easy pattern, then save and load all the data in four formats: raw (*.raw), interchangeable binary (*.tmd), human readable xml (*.xml) and json (*.json).
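As a hypothetical sketch of such a "describe once, export anywhere" pattern (the names are illustrative, not the framework's real API): a class exposes its fields through one generic serialize method, and each output format is just a different writer passed to it.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// One writer per format; here only a minimal JSON writer as an example.
struct JsonWriter {
    std::ostringstream out;
    bool first = true;
    void field(const std::string& name, const std::string& value) {
        out << (first ? "{" : ",") << "\"" << name << "\":\"" << value << "\"";
        first = false;
    }
    std::string done() { return out.str() + "}"; }
};

struct Person {
    std::string name = "Ada";
    std::string role = "programmer";
    // The single serialize method serves every format via the writer type:
    // a RawWriter, BinaryWriter or XmlWriter would plug in the same way.
    template <class Writer> void serialize(Writer& w) const {
        w.field("name", name);
        w.field("role", role);
    }
};
```

The appeal of the pattern is that adding a fifth format means writing one new writer class, with no change to the serialized types.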
I think that everyone has heard about cryptocurrency and Bitcoin by now, but I doubt that everybody knows the crazy world behind it. Everybody knows that bitcoin can be used instead of money to make online payments that cannot be traced. But when I talk to people, even people who work in the field of information technology, nobody seems to know the very basics of cryptocurrency: how it works, how much a bitcoin is worth, how you can buy it, what altcoins are, how you can convert them to money or to different altcoins, how new coins are introduced, the fact that bitcoin increased exponentially in the last year and that it will grow even more.
Every year in informatics is characterized by huge events. Youtube was born in 2005. Facebook started the story of social networks in 2006, followed by Twitter, Instagram and Google+. But 2017 is the year of cryptocurrency. Bitcoin was launched in 2009, when you could buy bitcoins for way less than one dollar; one of the first famous transactions was 10,000 bitcoins spent on two pizzas in 2010. Then other coins were introduced into the system and the value of a single coin improved a lot. Just to make you understand: at this moment, 1 bitcoin is worth $9500. If in the past you had bought 100 bitcoins for 0.30 dollars, now you would have $950,000, almost one million dollars. Consider that bitcoins were once used mostly to make illegal transactions, mostly in the world of the deep web, and imagine that those people, unlike me, have known everything about the world of bitcoin since 2009 and are now filthy rich. But jokes apart, the news that bitcoin grew exponentially is recent and it is spreading all over the world. The latest bitcoin forecasts say that it should increase even more in the future, but the predictions diverge. Somebody says that the bubble will explode soon, while other people say that, on the basis of the current trend, it should reach about $50,000 in 2020. So, even if bitcoin is the heaviest coin on the cryptocurrency market and it's reaching its maximum saturation in terms of circulating coins, it should increase even more in the future. So bitcoin, and cryptocurrency in general, is becoming in people's understanding a great investment to make huge quantities of money. You don't have to do a boring job every day of your life; the only thing you need to do is: buy $30'000 of bitcoins, wait 2-3 years, get $150'000 of bitcoins: that's it. Another way to make profits is to exchange one currency for another, to take advantage of market fluctuations.
In this case, you don't even need the value to improve; you can make money through wise choices. You could sell altcoins when their value is high and buy them again when the value is rapidly decreasing, doing what is called "trading". Now that the news is spreading along with its huge promises, everybody is buying bitcoins or altcoins to make profits from fluctuations and trading. But that's not all.
Sometimes I watch videos of youtubers who are very angry because youtube is slowly demonetizing their videos, making their lives harder. They point out very good arguments, but most of the time they want to protect their own interests. As a person totally outside this business, I wondered whether demonetization is good or bad, whether youtube is seriously out of its mind or whether the company has its good reasons for doing it. Justice or injustice?
First of all, let me say that youtube demonetization is not something new. Everybody has been talking about it since 2016, but it started in 2012, when youtube automatically demonetized videos with content that was unfriendly to advertisers, even though youtube started a massive action of demonetization only in the last two years. Of course, most youtubers are against demonetization because they use youtube to make money, and not as a free form of expression, unlike in 2005, when youtube was a platform that could be used for free cultural exchange of arts, ideas, facts, news and clips, or simply to broadcast yourself.
I remember a period when I was depressed; it was 2009. I was born in 1982, when Microsoft started its ascent. I lived through the period of MSDOS, Microsoft and IBM as a kid, when everybody talked about informatics and money. The world of computer science was so prosperous and full of promises that I started to follow it when I was just a little kid, and later I discovered that it was one of my biggest passions.
My dream at that time was to become famous with my software, to produce something incredible and to sell it. I started to learn computer programming by myself when I was little, and when I finished high school I wanted to start my own activity. In the meantime, Bill Gates became the richest man in the world. I lived through the period of great promises, when small teams could really make money starting from scratch, especially by making video games (in the good old times of Doom, Quake and Id Software). However, I had to continue with university and I did not have the time or money to follow my dreams. I continued to program in parallel with my everyday commitments, hoping to produce something new, but I directed my energies in the wrong direction. I continued to follow the absurd dream of creating competitive software without the resources to do it, hoping that something would change or a miracle would happen. Even if I was a good programmer, I did not have strong knowledge of how to complete a product and make it commercial, or how to start a business.
What is it?
The TextureMind framework is a framework written in C++ to develop a wide range of cross-platform applications. The framework is composed of a set of classes to facilitate multi-threading, serialization, ipc, networking, graphics and computer vision. It also comes with a complete set of applications to create images, animations, GUIs and videogames. I'm creating this framework to speed up the production of software in general. It has been coded by me from scratch and it can be seen as a collection of all the knowledge that I have in the field of computer programming. The framework is currently closed source and it will be used just for my personal creations.
It may be obvious to many of you, but I have seen teams of amateur developers dreaming of the perfect operating system, starting from the idea that contemporary operating systems (like Unix or Windows) are still far from perfect. In particular, I remember an Italian newsgroup frequented by more than one developer who wanted to create his brand new operating system from scratch, programming it all by himself; they gave me the inspiration to write this article, which may be useful to keep the same disaster from happening to someone else. Even if you are the superman of computer programming, today you cannot pursue the impossible dream of creating your own operating system without hurting yourself, for precise technical reasons. I don't want to discuss here the difficulties related to the creation of a new file system, virtual memory, inter-process communication, multithreading and so on, because my example is easier and more solid than that. I want to assume that you have already programmed a working kernel for that kind of operating system, with a "minimum" set of drivers to make it run on your pc, and that you are ready to share it with the entire world. Well, even in these ideal conditions, the main problem is with the companies that currently use Windows or Linux and that would have to invest money to drop their operating systems / applications to use yours; the same goes for the hardware vendors that would have to write the specific drivers, the software houses, customers, professionals, video gamers and so on. Today there are so many hardware devices that it is almost impossible to achieve the same performance that existing, well-proven operating systems have achieved in their many years of existence. It's not a matter of programming skills, it's a matter of "temporal gap".
Even if you are good enough to achieve perfection on a single machine, you will not be able to obtain the same stability on the wide range of existing personal computers, tablets, smart phones, sbc and all the devices mounting all the existing peripherals, because you won't have the money, credibility, reputation, experience, employees, followers and customers to do it. The situation in the past was slightly different: Windows was created to run mainly on the x86 family of processors, but there were other operating systems (like Amiga OS) that were designed to run on the 680x0 family of processors, so the idea of an operating system was more tied to the small set of hardware that the vendors had to sell. Today it's totally different. If you want to create a valid operating system, you have to cover all the existing hardware produced at least in the past 20 years; and even if your main target is a single device, you cannot surpass the existing operating systems, because they are already optimized to work better on the same device in terms of performance and power consumption. In conclusion, if you are having the crazy idea of creating your own operating system, just forget it, because you are wasting your time and the opportunity to produce something really useful. You will never produce even an ounce of what is required today to run a modern application on modern hardware, with the same degree of portability and support in terms of graphics / audio / peripherals; and even if you do, there are already more stable operating systems doing the same thing exactly when you are having the bad idea of doing it.
I want to write this post to clarify once and for all how the OpenGL extension mechanism works and the correct procedure for targeting OpenGL versions. I named this article this way because OpenGL is generally badly documented (or difficult to understand) and the OpenGL.org wiki makes things worse. For example, several people got confused by this page:
" Targeting OpenGL 2.1
These are useful extensions when targeting GL 2.1 hardware. Note that many of the above extensions are also available, if the hardware is still being supported. These represent non-hardware extensions introduced after 2.1, or hardware features not exposed by 2.1's API. Most 2.1 hardware that is still being supported by its maker will provide these, given recent drivers.
- Most of the previous list
- GL_ARB_map_buffer_range [...]"
And this document:
"New Procedures and Functions
void *MapBufferRange( enum target, intptr offset, sizeiptr length,
bitfield access );
void FlushMappedBufferRange( enum target, intptr offset, sizeiptr length );
(1) Why don't the new tokens and entry points in this extension have
"ARB" suffixes like other ARB extensions?
RESOLVED: Unlike a normal ARB extension, this is a strict subset of functionality already approved in OpenGL 3.0. This extension exists only to support that functionality on older hardware that cannot implement a full OpenGL 3.0 driver. Since there are no possible behavior changes between the ARB extension and core features, source code compatibility is improved by not using suffixes on the extension."
so the question is:
- Is GL_ARB_map_buffer_range a core extension or not?
In the previous article I emphasized the importance of not having a third-party loading library like glew, because OpenGL is too complex and unpredictable. For example, if you want to implement a videogame with average graphics and a large audience of users, OpenGL 2.1 is probably enough. At this point, you may need to load only that part of the library and make the right check of the extensions, or just use the functions that have been promoted to the core of the current version. Remember that an extension is not guaranteed to be present on a given version of OpenGL if it's not a core feature; this kind of extension was introduced after 3.0 to maintain forward compatibility.
For instance, it's useful to check the extension GL_ARB_vertex_buffer_object only on OpenGL 1.4 (in that case you may want to use glBindBufferARB instead of glBindBuffer) but not on later versions, because it has been promoted to the core from version 1.5 onward. The same applies to other versions of the core and extensions. If you target OpenGL 2.1, you have to make sure that the extensions typically used by 2.1 applications have not been promoted to the latest OpenGL 4.5 version, and check the extensions on previous versions of the library, making sure to use the appropriate vendor prefix, like ARB. Even if glew lets you make this kind of check before using the loaded functions, I don't recommend it, because glewInit() is going to load parts that you don't want to use, and you run the risk of underestimating the importance of checking the capabilities.
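The version/extension decision just described can be sketched in plain C++ with no GL calls involved (my own helper code, not an official API): hasExtension() checks the space-separated string returned by glGetString(GL_EXTENSIONS) with an exact token match, never a substring search, and the second helper encodes the rule that buffer objects are core from 1.5 onward (use glBindBuffer), while on 1.4 you must check GL_ARB_vertex_buffer_object and use glBindBufferARB.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Exact token match against a space-separated extension list. A substring
// search would wrongly accept names that merely contain the queried one.
static bool hasExtension(const std::string& extList, const std::string& name) {
    std::istringstream iss(extList);
    std::string token;
    while (iss >> token)
        if (token == name)
            return true;
    return false;
}

// Buffer objects were promoted to core in OpenGL 1.5.
static bool bufferObjectsInCore(int major, int minor) {
    return major > 1 || (major == 1 && minor >= 5);
}
```

On a real 1.4 context you would call hasExtension() first and only then fetch glBindBufferARB with wglGetProcAddress / glXGetProcAddress.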
Anyway, reading the OpenGL spec and manually adding the required extensions is a time-consuming job that you may not have the time to do. Recently, the Khronos Group released an xml file with a detailed description of the extensions and the functions for every version of the library; it is also used to generate the gl.h and glext.h header files with a Python script. In the same way, you can program a script that parses the gl.xml file to generate your own loading library, making the appropriate check of the extensions and including only the parts that you really need to load in your project. You can find the gl.xml file here:
OpenGL is not so easy to use. The API exposes thousands of functions that are grouped into extensions and core features that you have to check for every single display driver release, or the 3D application may not work. Since OpenGL is a graphics library used to program cool gfx effects without serious knowledge of the underlying display driver, a wide range of developers is tempted to use it regardless of the technical problems. For example, the functions are loaded "automagically" by an external loading library (like glew) and used to produce the desired effect, pretending that they are available everywhere. Of course this is totally wrong, because OpenGL is scattered into dozens of extensions and core features that are linked to the "target" version that you want to support. Loading libraries like glew are dangerous because they try to load all the available OpenGL functions implemented by the display driver without making a proper check, giving you the illusion that the problem doesn't exist. The main problem with this approach is that you cannot develop a good OpenGL application without making the following decision:
- How many OpenGL versions and extensions do I have to support?
From this choice you can define the graphics aspect of the application and how to scale it to support a large range of display drivers, including the physical hardware and the drivers supported by virtual machines. For example, VirtualBox with guest additions uses Chromium 1.9, which comes with OpenGL 2.1 and GLSL 1.20, so your application won't start if you programmed it using OpenGL 4.5; even worse, it won't start on graphics cards that support at most version 4.4 (which is very recent). For this reason, it's necessary to have full awareness of the OpenGL scalability principles that must be applied to start on most of the available graphics cards, reducing or improving the graphics quality on the basis of the version that you decided to target. With this level of awareness, you will realize that you don't need any kind of loading library to use OpenGL, but only a good check of the available features, which you can program by yourself. Moreover, libraries like glew are the worst, because they are implemented to replace the official gl.h and glext.h header files with a custom version anchored to the OpenGL version supported by that particular glew version.
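The scalability check argued for above starts with knowing which version the driver actually gives you. A small sketch (my own helper names): parse the string returned by glGetString(GL_VERSION) into major/minor numbers, then gate the feature set on them. The version string begins with "<major>.<minor>" and may be followed by vendor text, e.g. "2.1 Chromium 1.9" under VirtualBox.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

struct GLVersion {
    int major = 0;
    int minor = 0;
};

// Reads the leading "<major>.<minor>" and ignores any trailing vendor text.
static GLVersion parseGLVersion(const std::string& s) {
    GLVersion v;
    std::sscanf(s.c_str(), "%d.%d", &v.major, &v.minor);
    return v;
}

static bool atLeast(const GLVersion& v, int major, int minor) {
    return v.major > major || (v.major == major && v.minor >= minor);
}
```

With these two helpers the application can, for instance, enable a 4.x render path only when atLeast(v, 4, 0) holds and fall back to a 2.1 path otherwise, instead of assuming the newest API everywhere.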
Even if nowadays everybody seems to drop OpenGL methods when they are deprecated in the core profile, it doesn't mean that you don't need to use them in the compatibility profile or that you don't want to know how they work. I searched the web for more information on how the old, deprecated OpenGL matrices are implemented and I didn't find anything (except tutorials on how to use them!). My doubt was mainly about the operation order, because I needed to make a C++ implementation of them, maintaining the same exact behavior. I used OpenGL matrices in the past without worrying about how they were implemented; I had a precise idea, but now I have to be 100% sure. Even if we know how to implement operations between matrices, the row-column product doesn't have the commutative property, so the internal implementation can make the difference. In the end, my question is:
- What is the matrix row-column order and how is the product implemented in OpenGL?
Tired of finding pages saying only how useless and deprecated they are now, I had to check the Mesa source code myself to find what I was searching for:
P = A * B, computed element by element as:

P(i,j) = A(i,0) * B(0,j) + A(i,1) * B(1,j) + A(i,2) * B(2,j) + A(i,3) * B(3,j)

for each row i and column j from 0 to 3 (16 elements in total), with the matrices stored in column-major order, so that element (row, col) of a matrix m lives at m[col * 4 + row].
where A and B are 4x4 matrices and P is the result of the product. As you can see, this clarifies how rows and columns are internally ordered and how the product is implemented. In conclusion, this is the convention Mesa uses to implement the OpenGL methods that modify the current matrix.
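To pin the convention down in runnable form, here is a minimal C++ sketch (my own code, not Mesa's) of the column-major product together with a glTranslatef-style operation, which the legacy matrix stack applies by post-multiplication: current = current * T.

```cpp
#include <array>
#include <cassert>

// 4x4 matrix, column-major: element (row, col) lives at m[col * 4 + row].
using Mat4 = std::array<float, 16>;

static Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 p{};
    for (int i = 0; i < 4; ++i)          // row of the result
        for (int j = 0; j < 4; ++j) {    // column of the result
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a[k * 4 + i] * b[j * 4 + k];  // A(i,k) * B(k,j)
            p[j * 4 + i] = sum;
        }
    return p;
}

static Mat4 identity() {
    Mat4 m{};
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    return m;
}

// glTranslatef-style: build T column-major and post-multiply the current
// matrix with it. The translation sits in the fourth column (indices 12..14).
static Mat4 translate(const Mat4& current, float x, float y, float z) {
    Mat4 t = identity();
    t[12] = x; t[13] = y; t[14] = z;
    return multiply(current, t);
}
```

Because each call post-multiplies, the last call issued is the first one applied to a vertex, which is exactly the ordering behavior the fixed-function matrix stack exhibits.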
Only 6 months after I became an Amazon employee, I received this puzzle piece which says that I'm an Amazon inventor. I've always had ideas in my mind, from an early age, so it doesn't surprise me: sooner or later it had to happen.
The piece looks solid, well-made, glittering on my desk. Very nice!
Hi. Since NICE was acquired by Amazon, I have become part of Amazon EC2 and its worldwide team. My colleagues and I are working hard to improve our High Performance Computing and remote visualization technologies, which basically require advanced C/C++ programming skills and a deep knowledge of the OpenGL libraries. If you meet the requirements and want to be part of our world-class team, check our current offers here:
In addition to the skills listed in the announcements, the candidate must make moderate use of modern C++ features and third-party dependencies (e.g. the use of high-level frameworks like Qt or boost is justified only if it brings real benefits to the project, not to skip programming). The candidate must know how to manage device contexts, choose / set pixel formats / fbconfigs, create / destroy rendering contexts, set the default frame buffer or an FBO as the rendering target, and use graphics commands to render frames with multiple contexts running on multiple threads, without performance issues. A good knowledge of the Desktop OpenGL specifications (from 1.0 to 4.5), deprecation and compatibility mode is required (e.g. the candidate must know that some OpenGL functions must be taken with wgl / glXGetProcAddress instead of blindly using a loading library like glew). If you have concerns or questions, do not hesitate to contact me. Regards.
Recently Microsoft decided to include Xamarin in Visual Studio, even in the free version. This means that from now on you can use the C# language with the .NET / Mono framework to develop cross-platform applications with support not only for Windows, Linux and MacOSX, but also for Android and iOS!
Before this news, you had to pay for Xamarin, but now it's free (under certain conditions, visit xamarin.com). If you didn't want to pay for it, the only way to support mobile devices was to rely on existing frameworks, like Qt, Unity and Oxygine, or to produce extra code with the Android SDK and Xcode. The problem is that all these solutions use different languages: Qt and Oxygine are C++, Unity is a 3D engine that uses C# scripts, the Android SDK is Java oriented, while Xcode for iOS means Objective-C or Swift. If you wanted to support multiple platforms before, you had to change your habits and adopt a solution (even if you didn't like it) to cover a wide range of machines. Now you can continue to develop your project with Visual Studio in C# and then decide to convert part of it into a mobile app using the same framework, with a little bit of effort for the platform-specific features. If you want to develop an app in a short time and share it with the world, Xamarin will make your life easier.