Category Archives: Thoughts

More than tutorials or reference material, this category collects personal thoughts on various topics, such as informatics, science, politics, storytelling, the arts, or any of the other categories.

It's hard to continue, but I will

I'm so busy with my day job that even when I have free time to update this website, I'm often too tired to do anything with it and prefer to rest doing something else. I understand that the lack of updates is costing me views, but this is something I can't avoid. I keep this site alive all by myself: I'm the programmer, the webmaster and the graphics artist, and I can only do it in my spare time, because the rest of my time (90% of it) is taken up by work, private life and other commitments. It won't always be like this, though; it's just a bad period.

I have plenty of news to share. I'm still programming the TextureMind Framework, and I have finally completed the core: strings, multi-threading, containers, objects and serialization. Now I'm focusing on the graphics part. I'm designing the architecture of a graphics context for 2D and 3D rendering that hides the underlying implementation, which could be based on the Cairo, OpenGL or Vulkan libraries. The graphics context is natively optimized to minimize CPU usage and to handle command buffers. It will also be used to draw the GUI with different implementations and to render 3D scenes. I have just finished the architecture of the graphics part and I'm quite satisfied with the result. Now I have to use the Cairo library to draw the 2D shapes that will compose the GUI. It's not the first time I've implemented a brand new GUI from scratch, so I know exactly what to do. As always, the most important thing is the architecture. Since the framework already has serialization for primitive 2D shapes and a graphics engine to draw them, most of the work is done. The hard part is creating all the widgets and the events they generate.
It's important to create just the basic widgets needed to compose a full interface: form windows, buttons, radio buttons, check boxes, labels, frames, scrollbars, tables, toolbars, list views and tree views. More complex widgets can be built by composing these. All widgets will support skinning, animation, non-rectangular shapes and alpha blending. Another difficult part is handling resources such as images and materials correctly. After that, I will move on to the 3D part of the engine. Its first version will use Vulkan, since most of the graphics context was designed with Vulkan in mind. My first target is to finally load a full 3D model with the asset import library and render it with my engine, as I already did in the past with a very old version of the framework. Reaching that level will let me produce a long series of applications under my full control, without heavy frameworks or other third-party dependencies. The framework has been programmed from scratch in all its components, including strings and containers such as vector, list, map and multimap, so the executable size will be very small. Even memory allocation uses a custom algorithm to improve speed and reduce fragmentation when small arrays are frequently allocated. In the future I want to create the following applications, which will be free: a 3D game; a desktop screen recorder with real-time H.264 compression for NVIDIA, AMD and Intel hardware; a video editor; a painting program; a 3D modeller; a digital sculpting program. So, as you can see, I'm still full of good ideas for the future, and I will develop them over the years, no matter how hard it is or how much time it takes. See you next time, stay tuned!

Transaction Fee Bubble. Fees 100 times higher in 1 year. Unconfirmed Transactions. I suspect that Bitcoin will burst

Bitcoin is the first cryptocurrency ever made and the first fork of a cryptocurrency protocol. It was the first to introduce the concept of encrypted, decentralized money. Created in 2009, the protocol is now becoming obsolete, and the roughly 150,000 unconfirmed transactions are the evidence. Bitcoin cannot handle high traffic: many people's transactions remain unmined (and stuck) for days or weeks. If proof-of-work had already shown its downside in its high energy cost, now the evidence comes from the quality of service, as traffic involves more people and the fees attached to transactions increase to gain higher priority.

If traffic increases further, at the current difficulty level the entire system is doomed to burst. Other cryptocurrencies solve most of these problems. Bitcoin is currently stronger than the others because it was the first and has a larger market capitalization and user base, but that doesn't mean its design is better or that it will survive in the future. In my opinion, Bitcoin will burst when the system can no longer handle all the transactions, and we can already see the first symptoms. Check this website:

https://blockchain.info/unconfirmed-transactions

You can see for yourself that the number of unconfirmed transactions is very high at this moment, and it will grow as more people make transactions. Most transactions are left unmined because their fees are too low for the miners, who are spending computational power to get the best profit at the least expense. They have invested millions of dollars in mining farms full of powerful ASICs and graphics cards, and they mine the transactions with the highest fees to recoup that investment and make even more money. Since nobody is interested in spending an outstanding amount of power to mine transactions whose fees are lower than others, those transactions are rejected until they are dropped and recreated with higher fees, in an endless loop of speculation that is inflating a transaction fee bubble. As you can see from the above chart, the average transaction fee went from $0.27 to $27, a hundredfold increase in just one year. Fees are growing so fast that transactions will soon no longer be convenient. People will migrate their money to cheaper currencies, where transactions don't get stuck so easily. I have started to do it myself, because I had unconfirmed transactions and no decent way to accelerate them without paying mining pools or extra fees. And if I'm doing it, I'm sure other people are reaching the same conclusion right now. This is why I believe Bitcoin is doomed to burst.

Continue reading

YouTube demonetization: Justice or injustice?

I have sometimes watched videos of YouTubers who were very angry because YouTube was slowly demonetizing their videos, making their lives harder. They make very good points, but most of the time they are defending their own interests. As someone completely outside this business, I wondered whether demonetization is good or bad, whether YouTube is seriously out of its mind or whether the company has good reasons for doing it. Justice or injustice?

First of all, let me say that YouTube demonetization is nothing new. Everybody has been talking about it since 2016, but it started in 2012, when YouTube began automatically demonetizing videos whose content was unfriendly to advertisers, even though the massive wave of demonetization only arrived in the last two years. Of course, most YouTubers are against demonetization because they use YouTube to make money rather than as a free form of expression, unlike in 2005, when YouTube was a platform for the free cultural exchange of art, ideas, facts, news and clips, or simply to broadcast yourself.

Continue reading

The world of information technology is always full of possibilities, if you don’t waste your time

I remember a period when I was depressed; it was 2009. I was born in 1982, when Microsoft started its ascent. As a kid I lived through the era of MS-DOS, Microsoft and IBM, when everybody talked about informatics and money. The world of computer science was so prosperous and full of promise that I started to follow it when I was just a little kid, and I soon discovered it was one of my biggest passions.

My dream at the time was to become famous with my software, to produce something incredible and sell it. I started to learn programming by myself when I was young, and when I finished high school I wanted to start my own business. In the meantime, Bill Gates had become the richest man in the world. I lived through the period of great promise when small teams really could make money starting from scratch, especially making video games (in the good old days of Doom, Quake and id Software). However, I had to continue with university, and I had neither the time nor the money to follow my dreams. I kept programming alongside my everyday commitments, hoping to produce something new, but I directed my energies in the wrong direction. I kept chasing the absurd dream of creating competitive software without the resources to do it, hoping that something would change or a miracle would happen. Even though I was a good programmer, I had no solid knowledge of how to finish a product and commercialize it, or how to start a business.

Continue reading

Why a team of developers should never waste time programming a new operating system

It may be obvious to many of you, but I have seen teams of amateur developers dreaming of the perfect operating system, starting from the idea that contemporary operating systems (like Unix or Windows) are still far from perfect. In particular, I remember an Italian newsgroup frequented by more than one developer who wanted to create a brand new operating system from scratch, programming it all by himself. They inspired me to write this article, in the hope of preventing the same disaster from happening to someone else. Even if you are the superman of computer programming, today you cannot pursue the impossible dream of creating your own operating system without hurting yourself, for precise technical reasons. I don't even want to discuss the difficulties of creating a new file system, virtual memory, inter-process communication, multithreading and so on, because my argument is simpler and more solid than that. Let's assume you have already programmed a working kernel, a "minimum" set of drivers to make it run on your PC, and that you are ready to share it with the entire world. Even under these ideal conditions, the main problem is the companies currently using Windows or Linux, which would have to invest money to drop their operating systems and applications in favor of yours; the same goes for the hardware vendors that would have to write drivers, and for the software houses, customers, professionals, gamers and so on. Today there are so many hardware devices that it is almost impossible to match the support that existing, proven operating systems have achieved over their many years of existence. It's not a matter of programming skill; it's a matter of "temporal gap".

Even if you are good enough to achieve perfection on a single machine, you cannot obtain the same stability across the full range of existing personal computers, tablets, smartphones, single-board computers and every peripheral they mount, because you won't have the money, credibility, reputation, experience, employees, followers or customers to do it. The situation in the past was slightly different: Windows was created to run mainly on the x86 family of processors, but there were other operating systems (like AmigaOS) designed for the 680x0 family, so an operating system was more tightly bound to the small set of hardware its vendor had to sell. Today it's totally different. If you want to create a viable operating system, you have to cover all the hardware produced in at least the past 20 years; and even if your main target is a single device, you cannot surpass the existing operating systems, because they are already optimized to work better on that same device in terms of performance and power consumption. In conclusion, if you are entertaining the crazy idea of creating your own operating system, just forget it: you are wasting your time and the opportunity to produce something really useful. You will never produce even an ounce of what is required today to run a modern application on modern hardware, with the same degree of portability and support for graphics, audio and peripherals; and even if you do, more stable operating systems will already be doing the same thing by the time you have the bad idea of starting.

Why loading libraries are dangerous for developing OpenGL applications

OpenGL is not so easy to use. The API exposes thousands of functions, grouped into extensions and core features, that you have to check against every single display driver release, or the 3D application may not work. Since OpenGL is a graphics library used to program cool effects without deep knowledge of the underlying display driver, a large range of developers is tempted to use it regardless of these technical problems. For example, the functions are loaded "automagically" by an external loading library (like GLEW) and used to produce the desired effect, pretending they are available everywhere. Of course this is totally wrong, because OpenGL is scattered across dozens of extensions and core features tied to the "target" version you want to support. Loading libraries like GLEW are dangerous because they try to load all the OpenGL functions implemented by the display driver without a proper check, giving you the illusion that the problem doesn't exist. The main problem with this approach is that you cannot develop a good OpenGL application without answering the following question:

- How many OpenGL versions and extensions do I have to support?

From this choice you can define the graphical scope of the application and how to scale it across a large range of display drivers, including physical hardware and the drivers exposed by virtual machines. For example, VirtualBox with guest additions uses Chromium 1.9, which provides OpenGL 2.1 and GLSL 1.20, so your application won't start if you programmed it against OpenGL 4.5; even worse, it won't start on graphics cards that support at most version 4.4 (which is very recent). For this reason, you need full awareness of OpenGL scalability principles, so that the application can start on most available graphics cards, reducing or improving graphics quality based on the versions you decided to target. With this level of awareness, you will realize that you don't need any loading library to use OpenGL, only a good check of the available features, which you can program yourself. Moreover, libraries like GLEW are the worst of all, because they replace the official gl.h and glext.h header files with custom versions anchored to the OpenGL version supported by that particular GLEW release.

Continue reading

The artists of digital sculpting

The computer graphics of the past accustomed us to a rather artificial look for virtual environments and cinematic special effects, back when productions of the caliber of Toy Story and Jurassic Park could still deliver a thrill. A character created in 3D on a computer looked plasticky, fake, imperfect, hardly believable to the viewer. Not to mention amateur artists who, with very limited CAD and drawing programs, could not obtain the realism they were after no matter how hard they tried, neither in image quality nor in the definition of the objects. In recent years, however, computer graphics has made giant strides, not so much thanks to the increased power of computing systems (as is falsely believed) but thanks to real improvements in technique, such as high dynamic range and radiosity, and above all a completely new concept of 3D modelling known to many as digital sculpting.

Digital sculpting breaks with the old modelling criteria of past CAD programs, starting from the insight that modelling a detailed object, necessarily composed of thousands if not millions of vertices, is more conveniently done not by editing individual polygons each time but by acting on the surface of the object as in a paint program, so that you can dynamically model new shapes, change depth, carve, smooth, sculpt, extrude and bevel as a sculptor would. Moreover, since most living beings are symmetrical, digital sculpting lets you mirror your edits, so you can easily build a face or a human body starting from a sphere, without excessive struggle. The most famous and widely used digital sculpting software is ZBrush.

Continue reading

3D printing techniques

Have you ever dreamed of materializing 3D objects of your own creation? Maybe it's time to buy a 3D printer! No, I'm not joking: not everybody knows it, but there are printers capable of physically fabricating objects almost as quickly as a photocopier.

This means that with this technology in hand you could design an object, or download one directly from the internet, and then materialize it in reality: viable technologies for turning a virtual object into a real one are taking hold. It must be said, however, that printed objects are made of a homogeneous plastic-like material formed by interconnected layers of solidified polymers... so if you were already planning to download a mobile phone from the internet and print it, forget it, because (luckily for the vendors) no technology capable of such a thing exists yet. Nevertheless, printing objects has many uses and is widely employed in companies for rapid prototyping.

High Dynamic Range

Traditional digital images store color information as it is reproduced on screen, using at most 256 brightness values (8 bits) per channel. Although this is the most widespread technique for displaying an image, it is a disaster for the earlier stages of acquisition and digital processing. A sunny day can span up to 100,000 brightness levels, and a normal digital camera reduces this range to 256, with a consequent loss of information and problems of over- or under-exposure.

If you take an overexposed photo, the white areas are totally lost, as they contain no information (it is discarded during acquisition). Fortunately, the demands of modern digital photography have brought to light a new storage model called High Dynamic Range (known as HDR). This technique assigns to the energy of each light ray a floating-point value with a wide dynamic range, following the units of photometry. For example, the sun is assigned brightness values at least a million times greater than those of a powered-on television. In this way HDR solves the problems of poor contrast and wrong exposure that affect most digital photographs: with photo-editing software you can choose the right exposure afterwards and always obtain the best quality, because areas that look too bright or too dark never lose their information content.

Continue reading

Videos of next-generation games

A few years before the official release, Sony circulated videos showing previews of games with stunning graphics, to give an idea of the computational power of the PlayStation 3's new Cell processor. It was said that the console would even be able to handle entire worlds... to pile it on, other joke videos showed a mythical PlayStation 9 of the future made only of nanotechnology which, entering the gamer's nose, implanted itself in his brain and transported him into a gaming experience indistinguishable from reality (I'm not making this up; the video is on YouTube). Then the PS3 was released, all those rumors faded before the facts, and the games on the market showed far more modest graphics than we had been led to believe. An example? The old Motor Storm trailers are the most striking proof. So that revolution, that wave of games with movie-quality graphics and real-time planet simulation, never happened; and yet now, in 2008, something quite similar is appearing on PC.

Dual- and quad-core CPUs, graphics cards like the NVIDIA 8800 and physics boards like Ageia's PhysX are paving the way for a whole series of games that deliver the leap in quality that the broken promises of the past had accustomed us to expect.

Continue reading

New site

Hello everyone, and welcome to my new personal page. I am permanently moving away from my old site TSRevolution (www.tsrevolution.com), since it was part of a past project that I no longer intend to continue. I decided to set up this new site as a blog; however, it won't be like those pages where people keep a diary of their personal affairs. I will try to stay on the topic of computer programming, releasing sources, projects and articles on computer graphics. I'm looking for some (on-topic) sites to link, so if you are a programmer, have a page and are interested, you can contact me at gingegneri82@hotmail.it or leave a comment on this post. Special thanks go to the authors of WordPress and to the artist who designed the look of this page.

Happy browsing!