Category Archives: Programming

All that has to do with programming languages: news, frameworks, software and tutorials.

TextureMind progress #1 – Serialization and log

I continued to program the TextureMind Framework and I'm pretty happy with the result. I hope this framework will give me the chance to increase the production of my software and save most of my time (because I don't have much of it). People have told me many times to use existing frameworks to produce my work, and I tried. Most of them are not suitable for what I want to do, or they have license issues, or I simply don't like them. I want to make something new and innovative, and I feel like I'm about to do it. Let me say that the serialization is a masterpiece: you can write new classes directly in C++ with a very easy pattern, then save and load all the data in four formats: raw (*.raw), interchangeable binary (*.tmd), human-readable XML (*.xml) and JSON (*.json).
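
Just to give an idea, here is a purely hypothetical sketch of what such a pattern can look like; none of these names (Archive, Serializable, item) come from the actual framework, and the real implementation is surely different:

#include <cstdint>
#include <string>
#include <vector>

// Hypothetical archive: in a real framework each format (raw, tmd, xml, json)
// would provide its own implementation behind this interface.
class Archive
{
public:
    bool loading = false;   // true when reading, false when writing

    template <typename T>
    void item(const char* name, T& value)
    {
        (void)name; (void)value;  // placeholder: a real archive would map the
                                  // named field to the chosen format here
    }
};

class Serializable
{
public:
    virtual ~Serializable() = default;
    virtual void serialize(Archive& ar) = 0;  // one method for both save and load
};

class Texture : public Serializable
{
public:
    void serialize(Archive& ar) override
    {
        // The same code both reads and writes depending on the archive mode,
        // so adding a new field means touching a single place.
        ar.item("name",   m_name);
        ar.item("width",  m_width);
        ar.item("height", m_height);
        ar.item("pixels", m_pixels);
    }

private:
    std::string          m_name;
    uint32_t             m_width  = 0;
    uint32_t             m_height = 0;
    std::vector<uint8_t> m_pixels;
};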

Continue reading

TextureMind Framework – Work in progress

Version 1 (29/10/2017)

  • Custom set of classes to handle objects and containers (vector, list, map, multimap)
  • Serialization with 4 formats (xml, json, raw and formatted binary) optimized for speed
  • Threads, mutexes, semaphores, atomics
  • Integration with the TinyC library to build and execute just-in-time code
  • Custom memory allocation methods to trace leaks and optimize fragmentation
  • Full set of methods to handle streams in files and blocks of memory

Background

The CJS Framework that I was developing in my spare time has now become the TextureMind (TM) Framework. That happened for different kinds of reasons. When I created the CJS Framework, my first aim was to target multiple languages, using a valid and interchangeable way to serialize a full set of classes. That could be useful if you want to program an application in C++ that is meant to run only on desktop PCs (e.g. a server or a map editor) and that produces data for applications programmed in C#, Java or JavaScript, which are meant to run on mobile devices or web browsers (e.g. clients or videogames). The ambitious purpose was to produce a brand new framework with a full set of functionalities and classes, to make it the new standard for computer programming in general. Well, even if it could be a good idea, it is too ambitious for a single person and, for job reasons, I no longer have the time to do it. I renamed what I have done so far to TextureMind Framework; it won't be an open-source project or a public release, but something used just by me to produce software where interchangeability and performance are a problem for the maintenance of the projects. For instance, a simple program like GemFinder could have benefited if I had used this kind of framework to develop it: even when porting it to mobile devices with C#, I could easily have carried over the internal format used to save settings and images, thanks to the set of classes already implemented for multiple languages.

Moreover, during my experience I have understood how heavy the existing frameworks are in terms of amount of code and external dependencies, so I want to create something lighter that can equally satisfy my expectations and eventually evolve into a set of solutions to facilitate the production of projects for Computer Vision, 3D engines and videogames.

Continue reading

Why a team of developers should never waste time programming a new operating system

It may be obvious to many of you, but I have seen teams of amateur developers dreaming of the perfect operating system, starting from the idea that contemporary operating systems (like Unix or Windows) are still far from perfect. In particular, I remember an Italian newsgroup frequented by more than one developer who wanted to create his brand new operating system from scratch, programming it all by himself; they gave me the inspiration to write this article, in the hope that it may help someone else avoid the same disaster. Even if you are the Superman of computer programming, today you cannot pursue the impossible dream of creating your own operating system without hurting yourself, for precise technical reasons.

I don't want to discuss here the difficulties related to the creation of a new file system, virtual memory, inter-process communication, multithreading and so on, because my example is easier and more solid than that. I want to assume that you have already programmed a working kernel for that kind of operating system, plus a "minimum" set of drivers to make it run on your PC, and that you are ready to share it with the entire world. Well, even in these ideal conditions, the main problem lies with the companies that currently use Windows or Linux and that would have to invest money to drop their operating systems / applications in order to adopt yours; the same goes for the hardware vendors that would have to write specific drivers, the software houses, customers, professionals, video gamers and so on. Today there are so many hardware devices that it is almost impossible to achieve the same level of support and performance that existing, well-proven operating systems have reached in so many years of existence. It's not a matter of programming skills, it's a matter of "temporal gap". Even if you are good enough to achieve perfection on a single machine, you will not be able to obtain the same stability across the wide range of existing personal computers, tablets, smartphones, SBCs and all the devices mounting all the existing peripherals, because you won't have the money, credibility, reputation, experience, employees, followers and customers to do it.

The situation in the past was slightly different: Windows was created to run mainly on the x86 family of processors, but there were other operating systems (like Amiga OS) designed to run on the 680x0 family of processors, so the idea of an operating system was more tied to the small set of hardware that the vendor had to sell. Today it's totally different. If you want to create a valid operating system, you have to cover all the existing hardware produced at least in the past 20 years; and even if your main target is a single device, you cannot surpass the existing operating systems, because they are already optimized to work well on that same device in terms of performance and power consumption.

In conclusion, if you are toying with the crazy idea of creating your own operating system, just forget it, because you are wasting your time and the opportunity to produce something really useful. You will never produce even an ounce of what is required today to run a modern application on modern hardware, with the same degree of portability and support in terms of graphics / audio / peripherals; and even if you do, there are already more stable operating systems doing the same thing at the very moment you are having the bad idea of doing it.

Targeting OpenGL is not so easy, don’t get confused by the documentation

I want to write this post to clarify once and for all how the OpenGL extension mechanism works and what the correct procedure is to target OpenGL versions. I named the article this way because OpenGL is generally badly documented (or difficult to understand) and the OpenGL.org wiki makes things even worse. For example, several people got confused by this page:

https://www.opengl.org/wiki/OpenGL_Extension#Core_Extensions

Targeting OpenGL 2.1

These are useful extensions when targeting GL 2.1 hardware. Note that many of the above extensions are also available, if the hardware is still being supported. These represent non-hardware extensions introduced after 2.1, or hardware features not exposed by 2.1's API. Most 2.1 hardware that is still being supported by its maker will provide these, given recent drivers.

And this document:

https://www.opengl.org/registry/specs/ARB/map_buffer_range.txt

"New Procedures and Functions

void *MapBufferRange( enum target, intptr offset, sizeiptr length,
bitfield access );

void FlushMappedBufferRange( enum target, intptr offset, sizeiptr length );

Issues

(1) Why don't the new tokens and entry points in this extension have
"ARB" suffixes like other ARB extensions?

RESOLVED: Unlike a normal ARB extension, this is a strict subset of functionality already approved in OpenGL 3.0. This extension exists only to support that functionality on older hardware that cannot implement a full OpenGL 3.0 driver. Since there are no possible behavior changes between the ARB extension and core features, source code compatibility is improved by not using suffixes on the extension."

so the question is:

- Is GL_ARB_map_buffer_range a core extension or not?

Continue reading

How to parse gl.xml and produce your own loading library

In the previous article I emphasized the importance of not relying on a third-party loading library like glew, because OpenGL is too complex and unpredictable. For example, if you want to implement a videogame with average graphics and a large audience of users, OpenGL 2.1 is probably enough. At that point, you may need to load only that part of the library and make the right checks on the extensions, or just use the functions that have been promoted to the core of the version you target. Remember that an extension is not guaranteed to be present on a given version of OpenGL if it's not a core feature; this kind of extension was introduced after 3.0 to maintain forward compatibility.

For instance, it's useful to check the extension GL_ARB_vertex_buffer_object only on OpenGL 1.4 (in that case you may want to use glBindBufferARB instead of glBindBuffer), but not on later versions, because it has been promoted to core from version 1.5 onward. The same applies to other core versions and extensions. If you target OpenGL 2.1, you have to check whether the extensions typically used by 2.1 applications have been promoted to core up to the latest OpenGL 4.5, and you have to check those extensions on previous versions of the library, making sure to use the appropriate vendor prefix, like ARB. Even if glew lets you make this kind of check before using the loaded functions, I don't recommend it, because glewInit() also loads parts that you don't want to use and you run the risk of underestimating the importance of checking the capabilities.
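
As an illustration of the 1.4 / 1.5 case above, here is a minimal sketch of such a check. The getProcAddress() wrapper stands for wglGetProcAddress / glXGetProcAddress, the PFN typedefs come from glext.h, and the helper names are mine, not from any existing loader:

#include <cstdio>
#include <cstring>
#include <GL/gl.h>
#include <GL/glext.h>

// Hypothetical wrapper around wglGetProcAddress / glXGetProcAddress.
extern void* getProcAddress(const char* name);

static PFNGLBINDBUFFERPROC    pglBindBuffer    = nullptr;  // core entry point, 1.5+
static PFNGLBINDBUFFERARBPROC pglBindBufferARB = nullptr;  // ARB extension, 1.4

// Simplified check: GL_EXTENSIONS is a space-separated list on GL <= 3.0;
// a robust version should match whole tokens, not substrings.
static bool hasExtension(const char* name)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return ext && std::strstr(ext, name) != nullptr;
}

bool initVertexBufferEntryPoints()
{
    int major = 1, minor = 0;
    const char* ver = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    if (!ver || std::sscanf(ver, "%d.%d", &major, &minor) != 2)
        return false;

    if (major > 1 || (major == 1 && minor >= 5)) {
        // Promoted to core in 1.5: the unsuffixed entry point must exist.
        pglBindBuffer = (PFNGLBINDBUFFERPROC)getProcAddress("glBindBuffer");
        return pglBindBuffer != nullptr;
    }
    if (hasExtension("GL_ARB_vertex_buffer_object")) {
        // On 1.4 the functionality is only an ARB extension, with suffixed names.
        pglBindBufferARB = (PFNGLBINDBUFFERARBPROC)getProcAddress("glBindBufferARB");
        return pglBindBufferARB != nullptr;
    }
    return false; // no VBO support at all on this driver
}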

Anyway, reading the OpenGL specs and adding the required extensions manually is a time-expensive job that you may not have the time to do. Recently, the Khronos Group released an XML file with a detailed description of the extensions and the functions for every version of the library; it is also used to generate the gl.h and glext.h header files with a Python script. In the same way, you can write a script that parses the gl.xml file to generate your own loading library, making the appropriate extension checks and including only the parts that you really need to load in your project. You can find the gl.xml file here:

Continue reading

Why loading libraries are dangerous to develop OpenGL applications

OpenGL is not so easy to use. The API exposes thousands of functions, grouped into extensions and core features, that you have to check for every single display driver release or the 3D application may not work. Since OpenGL is a graphics library used to program cool gfx effects without a serious knowledge of the underlying display driver, a large range of developers is tempted to use it regardless of the technical problems. For example, the functions are loaded "automagically" by an external loading library (like glew) and then used to produce the desired effect, pretending that they are available everywhere. Of course this is totally wrong, because OpenGL is scattered across dozens of extensions and core features that are linked to the "target" version that you want to support. Loading libraries like glew are dangerous because they try to load all the available OpenGL functions implemented by the display driver without making a proper check, giving you the illusion that the problem doesn't exist. The main problem with this approach is that you cannot develop a good OpenGL application without making the following decision:

- How many OpenGL versions and extensions do I have to support?

From this choice you can define the graphics aspect of the application and how to scale it to support a large range of display drivers, including the physical hardware and the drivers supported by virtual machines. For example, VirtualBox with Guest Additions uses Chromium 1.9, which comes with OpenGL 2.1 and GLSL 1.20, so your application won't start if you programmed it using OpenGL 4.5; even worse, it won't start on graphics cards that support at most version 4.4 (which is very recent). For this reason, it's necessary to have a full awareness of the OpenGL scalability principles that must be applied to start on most of the available graphics cards, reducing or improving the graphics quality on the basis of the version that you decided to target. With this level of awareness, you will realize that you don't need any kind of loading library to use OpenGL, but only a good check of the available features, which you can program by yourself. Moreover, libraries like glew are the worst, because they replace the official gl.h and glext.h header files with a custom version anchored to the OpenGL version supported by that particular glew release.
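
As a minimal sketch of the kind of check meant here (the RendererLevel enum and the thresholds are illustrative, not from the post), the application can decide at startup which quality path to take, or refuse to start with a clear message:

#include <cstdio>
#include <GL/gl.h>

enum class RendererLevel { Unsupported, Basic_GL21, Advanced_GL33 };

// Decide the rendering path from the version reported by the driver,
// after a context has been created and made current.
RendererLevel detectRendererLevel()
{
    int major = 0, minor = 0;
    const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    if (!version || std::sscanf(version, "%d.%d", &major, &minor) != 2)
        return RendererLevel::Unsupported;

    if (major > 3 || (major == 3 && minor >= 3))
        return RendererLevel::Advanced_GL33;  // full quality path
    if (major > 2 || (major == 2 && minor >= 1))
        return RendererLevel::Basic_GL21;     // reduced quality path (e.g. VirtualBox / Chromium 1.9)

    return RendererLevel::Unsupported;        // report a clear error instead of crashing later
}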

Continue reading

How the deprecated OpenGL matrix model works

Even if nowadays everybody seems to drop the OpenGL methods that are deprecated in the core profile, it doesn't mean that you don't need to use them in the compatibility profile, or that you don't want to know how they work. I searched the web for more information on how the old, deprecated OpenGL matrices are implemented and I didn't find anything (except tutorials on how to use them!). My doubt was mainly about the order of operations, because I needed to make a C++ implementation of them that maintains the exact same behavior. I used OpenGL matrices in the past without worrying about how they were implemented; I had a precise idea, but now I have to be 100% sure. Even if we know how to implement operations between matrices, the row-column product is not commutative, so the internal implementation can make a difference. In the end, my question is:

- What is the matrix row-column order and how is the product implemented in OpenGL?

Tired of finding pages saying how useless and deprecated they are now, I had to check the Mesa source code myself to find what I was searching for:

P = A * B;

P[0] = A[0] * B[0] + A[4] * B[1] + A[8] * B[2] + A[12] * B[3];
P[4] = A[0] * B[4] + A[4] * B[5] + A[8] * B[6] + A[12] * B[7];
P[8] = A[0] * B[8] + A[4] * B[9] + A[8] * B[10] + A[12] * B[11];
P[12] = A[0] * B[12] + A[4] * B[13] + A[8] * B[14] + A[12] * B[15];

P[1] = A[1] * B[0] + A[5] * B[1] + A[9] * B[2] + A[13] * B[3];
P[5] = A[1] * B[4] + A[5] * B[5] + A[9] * B[6] + A[13] * B[7];
P[9] = A[1] * B[8] + A[5] * B[9] + A[9] * B[10] + A[13] * B[11];
P[13] = A[1] * B[12] + A[5] * B[13] + A[9] * B[14] + A[13] * B[15];

P[2] = A[2] * B[0] + A[6] * B[1] + A[10] * B[2] + A[14] * B[3];
P[6] = A[2] * B[4] + A[6] * B[5] + A[10] * B[6] + A[14] * B[7];
P[10] = A[2] * B[8] + A[6] * B[9] + A[10] * B[10] + A[14] * B[11];
P[14] = A[2] * B[12] + A[6] * B[13] + A[10] * B[14] + A[14] * B[15];

P[3] = A[3] * B[0] + A[7] * B[1] + A[11] * B[2] + A[15] * B[3];
P[7] = A[3] * B[4] + A[7] * B[5] + A[11] * B[6] + A[15] * B[7];
P[11] = A[3] * B[8] + A[7] * B[9] + A[11] * B[10] + A[15] * B[11];
P[15] = A[3] * B[12] + A[7] * B[13] + A[11] * B[14] + A[15] * B[15];

where A and B are 4x4 matrices and P is the result of the product. As you can see, this snippet clarifies how rows and columns are internally ordered (column-major) and how the product is implemented. In conclusion, Mesa implements the OpenGL methods that modify the current matrix on top of this very product.
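
As a compact restatement of the product expanded above (a sketch in C++, not the actual Mesa code): the element at row r and column c of a column-major 4x4 matrix lives at index c * 4 + r, and the multiply can be written as:

// Same result as the expanded snippet above: P = A * B, with all three
// matrices stored as 16 floats in column-major order.
void multiply4x4(float P[16], const float A[16], const float B[16])
{
    for (int c = 0; c < 4; ++c)        // column of B and of the result
        for (int r = 0; r < 4; ++r)    // row of A and of the result
        {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += A[k * 4 + r] * B[c * 4 + k];
            P[c * 4 + r] = sum;
        }
}

Since glMultMatrix computes current = current * M with this product, the transformation specified last is the one applied first to the vertices.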

Continue reading

Do you want to work for NICE/Amazon? We are hiring!

Hi. Since NICE was acquired by Amazon, I have become part of Amazon EC2 and its worldwide team. My colleagues and I are working hard to improve our High Performance Computing and remote visualization technologies, which basically require advanced C/C++ programming skills and a deep knowledge of the OpenGL libraries. If you meet the requirements and want to be part of our world-class team, check our current offers here:

https://www.amazon.jobs/it/locations/asti-italy

In addition to the skills listed in the announcements, the candidate must make a moderate use of modern C++ features and third-party dependencies (e.g. the use of high-level frameworks like Qt or Boost is justified only if it brings real benefits to the project, not to avoid programming). The candidate must also know how to manage device contexts, choose / set pixel formats and fbconfigs, create / destroy rendering contexts, set the default framebuffer or an FBO as the rendering target, and use graphics commands to render frames with multiple contexts running on multiple threads, without performance issues. A good knowledge of the desktop OpenGL specifications (from 1.0 to 4.5), deprecation and compatibility profiles is required (e.g. the candidate must know that some OpenGL functions can be obtained with wglGetProcAddress / glXGetProcAddress instead of blindly using a loading library like glew). If you have concerns or questions, do not hesitate to contact me. Regards.

Develop in C# for mobile with Xamarin, now it’s free!

Recently Microsoft decided to include Xamarin in Visual Studio, even in the free edition. This means that from now on you can use the C# language with the .NET / Mono framework to develop cross-platform applications with support not only for Windows, Linux and Mac OS X, but also for Android and iOS!

Before this news you had to pay for Xamarin, but now it's free (under certain conditions, visit xamarin.com). If you didn't want to pay for it, the only way to support mobile devices was to rely on existing frameworks like Qt, Unity and Oxygine, or to write extra code with the Android SDK and Xcode. The problem is that all these solutions use different kinds of languages: Qt and Oxygine are C++, Unity is a 3D engine that uses C# scripts, the Android SDK is mostly Java-oriented and Xcode for iOS uses Objective-C. If you wanted to support multiple platforms before, you had to change your habits and adopt a solution (even if you didn't like it) to cover a wide range of machines. Now you can continue to develop your project with Visual Studio in C# and then decide to convert part of it into a mobile app using the same framework, with a little extra effort for the platform-specific features. If you want to develop an app in a short time and share it with the world, Xamarin will make your life easier.

Super Mario Bros for Amstrad CPC 464

When I was a little kid, I remember that I really wanted to create a Super Mario Bros game for the Amstrad CPC 464. Now that I am 33 and work as a software engineer, I asked myself: why don't you make your old dream come true? :) Finally I found the time to create a demo of the famous Level 1-1 of Super Mario Bros:

The horizontal hardware scrolling needs a double buffer in order to get an accuracy of 4 pixels. The demo runs at the original Amstrad CPC 464 speed, emulated with Caprice. It is pretty fast and can loop horizontally with a limit of 512 tiles, while Level 1-1 takes only 212 tiles. I readjusted the original SMB graphics to fit a 256x192 Mode 1 screen with 4 colors. I really like the effect of the grayscale map mixed with the blue sky, like in the original NES game. This demo has been programmed with SDCC in C and Z80 assembly.

Continue reading

CJS Framework – Abandoned

The project is marked as abandoned because:

  • The name CJS Framework was already used by other frameworks, I don't like it anymore, and it no longer reflects the purpose of the framework
  • I don't have the time and resources to achieve the ambitious requirements that I set in the past (supporting interchangeable classes across four languages and making it a standard is too expensive; I cannot start an open-source project because it is not convenient for my current job, too many implications)
  • The framework evolved in a different direction. I changed the name to TM Framework (TextureMind Framework), a useful collection of classes to develop my own programs in C++ or C# when the projects become complicated enough to require it. Being a framework slave is never a good thing for a developer: I saw developers becoming incapable of doing the simplest things in C from scratch when they were anchored to Qt, Boost, Unity, Unreal Engine 4 or their own frameworks. I also saw good programmers becoming incapable of plain, honest programming (not even a Pac-Man game) without designing frameworks that would require years to be finished, so I don't want to feed this trend.

After a long search for the perfect cross-platform, multi-target language, I didn't find anything that satisfied my expectations. I wanted to create C++, C#, Java and JavaScript projects around a set of basic classes, functionalities and serialization to import/export files in a custom format.

At a first evaluation, the Haxe language with JSON seemed to be the perfect choice, but after using it I realized that the code produced for the languages of my interest is too heavy, badly indented and hard to reuse in the context of a specific project. That's because Haxe and the other multi-target languages are designed to produce a final result (i.e. a game) and not source code that is easy for a human reader to understand or to include in other parts of a wide project. For instance, if I create a Hello World example with Haxe and convert it to C#, it generates a lot of useless .cs files in a well-structured (and unreadable) project that you can build and run at the first attempt, but that is barely readable and hard to reuse in other clean C# projects. If you are thinking of creating a library in Haxe to include its source code in your projects, you had better change your mind: the final result in your C++, C#, Java or JavaScript projects may end up a gigantic mess.

For this reason, I decided to create from scratch a new framework called CJS Framework, which stands for CppJavaScriptSharp Framework. It's mainly a full bridge between these four languages (and it's not improbable that it will support other languages in the future). The basic idea is very simple: make a set of classes that are useful in a project and a serialization system that easily loads objects regardless of the language used. For instance, we can decide to take advantage of the .NET framework and produce an editor in C# to manipulate the maps of a game that will be programmed in JavaScript and run on Facebook or in the web browser. Or a server application in C++ that communicates in a binary format with an HTML5 client application coded in JavaScript. The framework is designed to satisfy all these tasks easily, and much more.

Continue reading

Unrelated Engine – Deferred Rendering and Antialiasing

I tried to implement explicit multisample antialiasing and I got good results, but it's slow on a GeForce 9600GT. A scene running at 110 fps dropped to 45 fps with only four samples, just to give an idea of the slowdown. While I was jumping to the ceiling for the amazing image quality of REAL antialiasing with deferred shading (not the fake crap called FXAA), I fell back to the floor after I saw the fps. What a shame.

[Screenshot: UnrelatedEngine01]

Anyway, I decided to switch from a deferred shading to a deferred lighting model just to implement a good trick: use the classic multisampling (which on my card performs pretty well even with 16 samples!), reading from the light accumulation buffer in the final step and writing the geometry to the screen with antialiasing enabled. The result is a little weird, but you can fix it by using that crappy FXAA on the light accumulation buffer, which is smoother than the other image components. For example, I can use mipmapping or anisotropic filtering to eliminate the texture map aliasing, FXAA to eliminate the light accumulation buffer aliasing and finally MSAA to eliminate the geometry aliasing.
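
A very rough sketch of how this final step can be organized; all names are hypothetical, the shaders are omitted and the FBO entry points are assumed to be already loaded, so this is not the actual engine code:

#include <GL/gl.h>
#include <GL/glext.h>

extern GLuint lightAccumFbo;        // plain (non-multisampled) FBO used for light accumulation
extern GLuint lightAccumTexture;    // its color attachment
extern void renderGBufferPass();
extern void renderLightAccumulationPass();
extern void renderGeometryWithAccumulatedLighting();   // shader samples the light buffer

void renderFrame()
{
    // 1) Classic deferred lighting passes into the accumulation buffer.
    glBindFramebuffer(GL_FRAMEBUFFER, lightAccumFbo);
    glDisable(GL_MULTISAMPLE);
    renderGBufferPass();
    renderLightAccumulationPass();   // FXAA could be applied to this buffer here

    // 2) Final geometry pass to the default framebuffer, which was created with
    //    a multisampled pixel format, reading the accumulated lighting as a plain
    //    texture, so MSAA only has to resolve the geometry edges.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glEnable(GL_MULTISAMPLE);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, lightAccumTexture);
    renderGeometryWithAccumulatedLighting();
}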

PS: I used the nanosuit model from this site: www.gfx-3d-model.com/2009/09/nanosuit-3d-model/

TSREditor – Last screenshots

TSREditor is a huge editor in the style of Blender3D, designed to create or edit resources like textures, 3D models, sounds and levels for games and other stuff.

[Screenshot: TSREditor07]

In spite of the large amount of work needed to reach a decent version of this software, I decided that it will be free as part of my Unrelated Framework. At the moment I'm far from a decent beta version to release, but I can show you two nice screenshots of the program at work.

[Screenshot: TSREditor06]

Unrelated Engine – Work in progress

This is the first "work in progress" video of my 3D engine, called Unrelated Engine. Some complex animated models come from Doom 3. They were converted to a maximum of 4 weights per vertex, and identical vertices were removed to improve speed. The shading language was used to render a large number of skinned meshes and complex materials at a reasonable speed.

The 3D models are from Doom 3 and from http://www.models-resource.com/; they were used only to test my engine and to make this video. The rights to these 3D models and the music are reserved by their respective authors.

Unrelated Engine – A nice test with Mario 64 map

I created a nice video with an engine that I'm still developing and that is part of my Unrelated Framework. It uses:

- OpenGL (to draw the graphics)
- DevIL (to import images)
- Assimp (to import 3d models)

The 3D models are from http://www.models-resource.com/; they were used only to test my engine and to make this video. The rights to these 3D models and the music are reserved by their respective authors.