Today it's full of people still convinced that they can become rich with cryptocurrency. Even after cryptocurrency showed its real face over the last few months, they keep dreaming of something that will never happen. Maybe you are one of those people, so I'm sorry, because there is bad news for you. The only people who became rich with cryptocurrency were the ones who bought in years ago, when almost nobody knew about it. People who discovered it later are paying to make other people even richer. At the end of the day, cryptocurrency is just a trend, and like every other trend it will fade as soon as people understand that the chances of making money are very slim. After that, maybe cryptocurrency will be used for what it was designed for: making anonymous trades. Crypto can work only for that category of people; for common people it's just rubbish. If you lose your wallet, you are fucked. If you make a mistake, hackers can steal your money very easily and there is nothing you can do. Exchange platforms like Bittrex, Binance, Bitfinex, Coinbase and so on are robbed every month for amounts in the hundreds of millions of dollars (the latest theft was about $500 million), so your money is never safe. You can buy a hardware wallet to make your money safer, but even in that case you can lose the wallet, you can break it, the value of the currency can collapse, or the wallet can become useless if you need to move to another crypto while the price is falling. And what about mining? Tons of energy wasted to calculate useless hash codes, just to make the transactions work, for a tiny reward and a huge waste of power. This system is so perverse that it is not suitable for normal people, only for mining companies that are buying up all the graphics cards, creating a black hole in the hardware market.
Thousands of cryptocurrency projects start and die to make easy money with pump-and-dump strategies, a game where only the founders and a few insiders make real money, while everyone else is doomed to lose everything. And people are still convinced that crypto is amazing because a bunch of people became rich in the past, and they believe it will happen again in their own wallet, yeah, with the same probability as winning the lottery. People are blind; they really need to wake up. This system is so full of speculation and issues that it will never replace legal tender, and the market capitalization will never be 10, 100 or 1000 times the current one, so every long-term investment is useless: the market is consolidated now, and new investments only make richer those who already own huge quantities of crypto that they can sell when the price is high again, and so on. Do you want to waste money? Buy $1000 of crypto, now. It's like gambling. You are risking $1000 just to make $2000 at best. It's more probable that you will earn nothing or lose half of it. And do you know what is funny? Thousands of people are playing this game right now. They are wasting time and money.
In the past, cryptocurrency was the easiest and safest way to make money. The only thing you needed to do was buy bitcoins and wait for the price to go up, or mine the cryptocurrency with a laptop. Plain and simple. Only a few people took advantage of it, because it was almost unknown. For years bitcoin grew exponentially without a significant loss, so the big promise was that the trend would continue without end, making everybody rich for eternity. When people started to think that, it was the beginning of the end. People started to buy bitcoin and other altcoins without any logic or strategy, lured by the false promise of becoming rich. This created a dramatic growth in market capitalization and volume.
When the capitalization was enough to make rich most of the people who had invested years before, they started to sell, also because the risk of holding had become too high. This caused an incredible loss in market capitalization, defying every optimistic forecast. However, this phenomenon is very common and is known as "consolidation". When price and volume grow too much in a short period of time, the price is doomed to consolidate. As you can see on the above chart, the moment bitcoin started to grow rapidly is when the mass media spread the news that bitcoin exists and can be bought to get rich in a short time. However, buying something when the price or the volume is so high is always a bad idea. People who bought bitcoin at $20,000 lost about half of their money in about one month. Another false promise is that bitcoin will grow again, reaching a price of $500,000, which would still make the investment worthwhile in the long term. This could be false for several reasons. Bitcoin has an old protocol that is already suffering from unconfirmed transactions, because miners want to speculate on the transaction fees. Moreover, other altcoins like Ethereum, Ripple, Dash, Litecoin, Status, EOS and so on are starting to consolidate in the same way.
Bitcoin is the first cryptocurrency ever made and the first fork of a cryptocurrency protocol. It was the first to introduce the concepts of encryption and decentralization of money. Created in 2009, the protocol is now becoming obsolete. The fact that about 150,000 transactions are currently unconfirmed is the evidence. Bitcoin cannot handle a high volume of traffic: most people are starting to see their transactions left unmined (and stuck) for days or weeks. PoW had already shown its downsides with its high energy cost; now the evidence comes from the quality of service, as the traffic involves more people and the fees attached to transactions increase to buy higher priority.
If traffic increases even more, with the current level of difficulty the entire system is doomed to burst. There are other cryptocurrencies that solve most of these problems. Bitcoin is stronger than other cryptocurrencies right now because it was the first and it has a larger market capitalization and a larger user base. However, that doesn't mean its design is better or that it will survive in the future. In my opinion, Bitcoin will burst when the system is no longer able to handle all the transactions. We can see the first symptoms right away. Check this website:
You can see with your own eyes that the number of unconfirmed transactions is very high at this moment, and it will increase in the future as more people make transactions. Most of the transactions are not mined because their fees are too low for the miners, who are spending computational power to get the best profit at the least expense. They spent millions of dollars to build mining farms with powerful ASICs and graphics cards. They want to mine the transactions with the highest fees to recoup their investment and make even more money. Since nobody is interested in wasting an outstanding amount of power to mine transactions with lower fees than the others, the low-fee transactions are rejected until they are dropped and created again with higher fees, in an infinite loop of speculation that is creating a Transaction Fee Bubble. As you can see from the above chart, the average transaction fee grew from $0.27 to $27, a hundredfold in just one year. The fees are growing so fast that transactions will no longer be convenient. People will migrate their money to less expensive currencies, where transactions don't get stuck so easily. For instance, I started doing it right now, because I had unconfirmed transactions and no decent way to accelerate them without paying mining pools or other fees. And if I'm doing it, I'm sure that at this moment other people are making the same consideration. So, this is the reason why I believe that Bitcoin is doomed to burst in the future.
I continued to program the TextureMind Framework and I'm pretty happy with the result. I hope this framework will give me the chance to increase the production of my software and save most of my time (because I don't have much of it). People told me many times to use existing frameworks to produce my works, and I tried. Most of them are not suitable for what I want to do, or they have license issues, or I simply don't like them. I want to make something new and innovative, and I feel like I'm about to do it. Let me say that the serialization is a masterpiece. You can program new classes directly in C++ with a very easy pattern, then save and load all the data in four formats: raw (*.raw), interchangeable binary (*.tmd), human-readable XML (*.xml) and JSON (*.json).
I don't know how many people have already had important bitcoin transactions stuck in the UNCONFIRMED state for days or weeks. This usually happens if you put a low fee on your transaction, but basically it can happen anyway. The fees are taken by miners for the mining work. Mining is the computational work required for the system to survive. The system gives a reward to convince people to use powerful machines for mining. Mining has been made difficult on purpose with PoW (Proof of Work), to prevent everybody from getting rich in a moment with mining (which would make the currency useless). Basically, to mine transactions your computer must generate hash codes. The system requires a particular kind of hash code; for instance, Bitcoin requires a certain number of zeroes at the start of the code. If the condition is not respected, the code is rejected, so you modify a redundant field in the payload, called the "nonce", which is used to generate a different hash code from the same content. Codes are generated until the system accepts one, and the miner gets the fee only if the block is mined. The condition that must be satisfied determines the level of difficulty, which is typically measured as the number of attempts required before a hash code is accepted. At this moment, the difficulty level of Bitcoin is 1,590,896,927,258. This is the reason why transactions require expensive machines like an Antminer S9 or a mining rig of eight GeForce 1080 Ti cards. Just to make $24 per day you need to generate 14 TH/s with a consumption of 1350 W, just because whoever created this system had a misconception of freedom and power. Even if you find the idea of mining fantastic for making easy money, mining itself is one of the worst concepts ever created in the history of computer science, a total waste of energy and the reason why your transactions may get stuck in an UNCONFIRMED state for days or weeks, because the system is not efficient at all.
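The nonce-and-hash loop described above can be sketched in a few lines. This is a toy illustration with SHA-256 and a leading-zero condition, not the real Bitcoin block format or difficulty target:

```python
import hashlib

def mine(payload: bytes, difficulty: int) -> tuple[int, str]:
    """Increment the nonce until the hash starts with `difficulty` zero digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest  # condition satisfied: the block is "mined"
        nonce += 1  # hash rejected: change the nonce and hash the same content again

nonce, digest = mine(b"some transactions", 4)
print(nonce, digest)
```

With four leading zeroes this takes tens of thousands of attempts on average; each extra zero multiplies the work by 16, which is exactly how raising the difficulty slows everyone down.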
If you have just discovered cryptocurrency and you want to buy a powerful machine for mining, abandon the idea, because it's a waste of money. First of all, the hardware is too expensive and too loud. You can buy an Antminer S9 for $1700 just to mine $24 of bitcoins per day, and it's not even practical. The machine consumes about 1350 W and makes 75 dB of noise. If you are a normal person living in an apartment, just forget it. You would have to buy a sound-proof server rack for $1000 to reduce the noise and let the machine run day and night. In these conditions the lifetime of the device is reduced drastically; the machine could die after only two years. If you make $24 per day, in one month that is $24 x 30 = $720. If you believe that things get better with a GeForce 1080 Ti, you are wrong. In this case a single GPU can mine less than $4 per day, running day and night. With 8 GPUs you get tolerable noise but a power consumption of 250 W x 8 = 2000 W. You would have to revise your energy contract to at least 4.5 kW if you want to live with other people in your apartment. If you make an investment of $6000, you have to wait 6000 / 32 = 187.5 days, about six months, to get your money back, and that much time can be considered a decade in the cryptocurrency world. Moreover, if you buy hardware to mine cryptocurrency, even if the value grows, the difficulty will grow to the point of making mining almost impossible. It's what happened to Bitcoin in a short time. What you could mine in the past with GH/s now requires TH/s. So, if you buy an Antminer S9, you have no guarantee that the machine will not break or become obsolete like its predecessors, and the same goes for graphics cards, which become obsolete in a few months. I remember buying a GeForce GTX 670 when it was one of the most powerful cards of its time, and now it can do only 60 H/s with Equihash, where you need 700 H/s to make less than $4.
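The payback arithmetic above is simple enough to script. The numbers come from the post; electricity cost is ignored, which only makes the real picture worse:

```python
def payback_days(investment: float, revenue_per_day: float) -> float:
    """Days of continuous mining needed just to recover the hardware cost."""
    return investment / revenue_per_day

# Antminer S9: $1700 of hardware plus ~$1000 of sound-proofing, at $24/day
print(payback_days(1700 + 1000, 24))   # 112.5 days, before electricity
# Mining rig of eight 1080 Ti: $6000 at $32/day
print(payback_days(6000, 32))          # 187.5 days, about six months
```

And this assumes the daily revenue stays constant, while in practice the difficulty keeps rising and pushes the break-even point further away.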
Imagine if the same happened to your eight GeForce 1080 Ti cards: it would be a disaster.
I'm not saying that mining is useless: mining is what keeps most cryptocurrencies alive. I'm saying that mining is an investment that requires the right equipment, years of experience and effort; it's not something that everybody can do at home. I'm reading stories of inexperienced people who are literally wasting their time and money on it. A 21-year-old kid said on Twitter that he ordered five Antminer S9s. I hope for his sake that he no longer lives with his parents, but in a house with a 10 kW energy contract and a basement with no neighbours within 100 meters. It's pretty sad to read bullshit about a topic that is actually pretty interesting. It's sad to read that people still consider cloud mining services like Genesis a good alternative. Genesis was good in the past, but not anymore. The company bought machines with the money of people who signed convenient contracts to mine bitcoins, and now it has bought a shitload of graphics cards (which are easier to maintain) with expensive fees for the customers who are mining. If you do the math, mining with Genesis is not convenient at all. You pay $7200 for a two-year contract to make 34 bucks per day with Monero. In the best scenario, you will see your money back in seven months. During this period, they will give you dollars instead of monero. You have no guarantee that the difficulty will not grow even more. You have no guarantees at all. And at that price you can buy a mining rig of ten GeForce 1080 Ti cards to mine whatever you want, without a two-year time limit, and in the worst scenario you can sell the graphics cards.
With trading you can make actual money in the shortest possible time without a heavy investment. My idea is: if you have money to waste, just do trading. To make a comparison, mining needs a loud $3000 machine to make only $24 a day. With trading and the same amount of money, you can make $1500 in one day. Yes, you are reading that right. The only thing you need to do is close a trade with a gain of 50%, and it's not difficult at all. You have to be patient, smart and intuitive. Bitcoin is stuck at $11,000 now. It's not a problem. There are hundreds of cryptocurrencies you can use for trading, supported by free trading platforms on the internet, like Poloniex. In the last week, IOTA grew 150%. With $3000, you could have $7500 in your wallet in one week, versus $168 with mining. And that's just one week, without trading through the intermediate gains and losses. Imagine what you can do with fluctuations of 20% or 50%, day and night. Let's do some math. With $3000, if you guess an increment of 20% ten times, you can make 3000 * 1.2 ^ 10 = $18575, which is about one month of lazy trading. Mining is a no-brainer that gives you a little money after months or years, but with your brain there is no limit to what you can do.
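The compound-gain arithmetic is easy to verify with a one-liner (the function name is mine; note it assumes every trade is a winner, which is the optimistic part of the argument):

```python
def compound(capital: float, gain: float, trades: int) -> float:
    """Capital after `trades` consecutive trades, each with fractional `gain`."""
    return capital * (1 + gain) ** trades

# Ten winning trades of +20% each, starting from $3000
print(round(compound(3000, 0.20, 10)))  # 18575
# A single +50% trade on the same capital
print(compound(3000, 0.50, 1))          # 4500.0
```

The same formula also shows the downside: ten consecutive losses of 20% leave you with about $322 of the original $3000.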
Yesterday was a crazy day for bitcoin and cryptocurrency in general. After a rapid peak at $11,000 it dropped drastically to $9500, then it grew again to $10,000, and now it's stable at $10,300. Other cryptocurrencies (except Dash and a few others) had a bad day as well, with losses from about 5% to 30%. As a consequence, bitcoin halted its crazy run toward $100,000 and didn't reach $12,000 as predicted yesterday. This event is probably not that bad if you need more time to decide how to invest in bitcoin and find the best way to make money.
If bitcoin is too stable for your purposes, you can trade it against other cryptocurrencies that are growing more rapidly, taking care not to lose too much in the exchange. Even if bitcoin has a high value, that doesn't matter if you don't already own bitcoins; if you want to make money with trading, the high price only makes it more difficult to buy in.
I think everyone has heard about cryptocurrency and bitcoin at some point, but I doubt that everybody knows the crazy world behind it. Everybody knows that bitcoin can be used instead of money to make online payments that cannot be traced. But when I talk to people, even people who work in information technology, nobody seems to know the very basics of cryptocurrency: how it works, how much a bitcoin is worth, how you can buy it, what altcoins are, how to convert them into money or into other altcoins, how new coins are introduced, or the fact that bitcoin increased exponentially in the last year and that it will grow even more.
Every year in informatics is marked by huge events. YouTube was born in 2005. Facebook started the story of social networks in 2006, followed by Twitter, Instagram and Google+. But 2017 is the year of cryptocurrency. Bitcoin was launched in 2009; one of the most famous early transactions was 10,000 bitcoins spent to buy two pizzas. Then more coins were introduced into the system and the value of a single coin grew enormously, even though in 2009 you could buy bitcoins for far less than one dollar. Just to give you an idea, at this moment 1 bitcoin is worth $9500. If in the past you had bought 100 bitcoins for 0.30 dollars, now you would have $950,000, about one million dollars. Consider that bitcoins were once used mostly for illegal transactions, mostly in the world of the deep web, and that those people, unlike me, have known everything about the world of bitcoin since 2009 and are now filthy rich. But joking aside, the news that bitcoin grew exponentially is recent and is spreading all over the world. The latest bitcoin forecasts say that it should increase even more in the future, but the predictions diverge. Some say that the bubble will explode soon, while others say that, based on the current trend, it should reach about $50,000 in 2020. So, even if bitcoin is the heaviest coin on the cryptocurrency market and is reaching its maximum saturation in terms of circulating coins, it should increase even more in the future. So bitcoin, and cryptocurrency in general, are becoming in people's minds a great investment for making huge amounts of money. You don't have to do a boring job every day of your life; the only thing you need to do is buy $30,000 of bitcoins, wait 2-3 years, and cash out $150,000: that's it. Another way to make profits is to exchange one currency for another, taking advantage of market fluctuations.
In this case, you don't even need the value to rise; you can make money through wise choices. You could buy altcoins when the value is rapidly decreasing and sell them again when the value is high, doing what is called "trading". Now that the news is spreading along with its huge promises, everybody is buying bitcoins or altcoins to make profits from fluctuations and trading. But that's not all.
Sometimes I watch videos of youtubers who are very angry because YouTube slowly demonetized their videos, making their lives harder. They make very good arguments, but most of the time they are defending their own interests. As a person totally outside this business, I asked myself whether demonetization is good or bad, whether YouTube is seriously out of its mind or the company has good reasons for doing it. Justice or injustice?
First of all, let me say that YouTube demonetization is nothing new. Everybody has been talking about it since 2016, but it started in 2012, when YouTube automatically demonetized videos with content that was unfriendly to advertisers, even though YouTube started a massive demonetization campaign only in the last two years. Of course, most youtubers are against demonetization because they use YouTube to make money, and not as a free form of expression, unlike in 2005, when YouTube was a platform for the free cultural exchange of arts, ideas, facts, news and clips, or simply a place to broadcast yourself.
I remember a period when I was depressed; it was in 2009. I was born in 1982, when Microsoft started its ascent. As a kid I lived through the era of MS-DOS, Microsoft and IBM, when everybody talked about informatics and money. The world of computer science was so prosperous and full of promises that I started to follow it when I was just a little kid, and later I discovered that it was one of my biggest passions.
My dream at that time was to become famous with my software, to produce something incredible and sell it. I started to learn computer programming by myself when I was little, and when I finished high school I wanted to start my own business. In the meantime, Bill Gates became the richest man in the world. I lived through the period of great promises, when small teams could really make money starting from scratch, especially by making video games (in the good old days of Doom, Quake and id Software). However, I had to continue with university and I didn't have the time or money to follow my dreams. I kept programming alongside my everyday commitments, hoping to produce something new, but I directed my energies in the wrong direction. I kept chasing the absurd dream of creating competitive software without the resources to do it, hoping that something would change or a miracle would happen. Even if I was a good programmer, I didn't have strong knowledge of how to finish a product and make it commercial, or how to start a business.
Version 1 (29/10/2017)
- Custom set of classes to handle objects and containers (vector, list, map, multimap)
- Serialization in 4 formats (XML, JSON, raw and formatted binary), optimized for speed
- Threads, mutexes, semaphores, atomics
- Integration with the TinyC library to build and execute just-in-time code
- Custom memory allocation methods to trace leaks and reduce fragmentation
- Full set of methods to handle streams in files and blocks of memory
It may be obvious to many of you, but I have seen teams of amateur developers dreaming of the perfect operating system, starting from the idea that contemporary operating systems (like Unix or Windows) are still far from perfect. In particular, I remember an Italian newsgroup frequented by more than one developer who wanted to create his own brand-new operating system from scratch, programming it all by himself. They inspired me to write this article; maybe it can help prevent the same disaster from happening to someone else. Even if you are the superman of computer programming, today you cannot pursue the impossible dream of creating your own operating system without hurting yourself, for precise technical reasons. I don't want to discuss here the difficulties related to creating a new file system, virtual memory, inter-process communication, multithreading and so on, because my example is simpler and more solid than that. Let's assume you have already programmed a working kernel for this kind of operating system, plus a "minimum" set of drivers to make it run on your PC, and that you are ready to share it with the entire world. Well, even under these ideal conditions, the main problem lies with the companies currently using Windows or Linux, which would have to invest money to drop their operating systems and applications to adopt yours; the same goes for the hardware vendors that would have to write specific drivers, the software houses, customers, professionals, video gamers and so on. Today there are so many hardware devices that it is almost impossible to match the performance that existing, battle-proven operating systems have achieved over their many years of existence. It's not a matter of programming skill; it's a matter of "temporal gap".
Even if you are good enough to achieve perfection on a single machine, you will not be able to obtain the same stability across the wide range of existing personal computers, tablets, smartphones, SBCs and every device mounting every existing peripheral, because you won't have the money, credibility, reputation, experience, employees, followers and customers to do it. The situation in the past was slightly different: Windows was created to run mainly on the x86 family of processors, but there were other operating systems (like AmigaOS) designed to run on the 680x0 family, so the idea of an operating system was more tied to the small set of hardware that each vendor had to sell. Today it's totally different. If you want to create a viable operating system, you have to cover all the hardware produced in at least the past 20 years; and even if your main target is a single device, you cannot surpass the existing operating systems, because they are already optimized to work better on that same device in terms of performance and power consumption. In conclusion, if you are toying with the crazy idea of creating your own operating system, just forget it, because you are wasting your time and the opportunity to produce something really useful. You will never produce even an ounce of what is required today to run a modern application on modern hardware, with the same degree of portability and support in terms of graphics, audio and peripherals; and even if you do, there will already be more stable operating systems doing the same thing at the very moment you have the bad idea of doing it.
I wrote this post to clarify once and for all how the OpenGL extension mechanism works and the correct procedure for targeting OpenGL versions. I named the article this way because OpenGL is generally badly documented (or difficult to understand) and the OpenGL.org wiki makes things worse. For example, several people got confused by this page:
" Targeting OpenGL 2.1
These are useful extensions when targeting GL 2.1 hardware. Note that many of the above extensions are also available, if the hardware is still being supported. These represent non-hardware extensions introduced after 2.1, or hardware features not exposed by 2.1's API. Most 2.1 hardware that is still being supported by its maker will provide these, given recent drivers.
- Most of the previous list
- GL_ARB_map_buffer_range [...]"
And this document:
"New Procedures and Functions
void *MapBufferRange( enum target, intptr offset, sizeiptr length,
bitfield access );
void FlushMappedBufferRange( enum target, intptr offset, sizeiptr length );
(1) Why don't the new tokens and entry points in this extension have
"ARB" suffixes like other ARB extensions?
RESOLVED: Unlike a normal ARB extension, this is a strict subset of functionality already approved in OpenGL 3.0. This extension exists only to support that functionality on older hardware that cannot implement a full OpenGL 3.0 driver. Since there are no possible behavior changes between the ARB extension and core features, source code compatibility is improved by not using suffixes on the extension."
so the question is:
- Is GL_ARB_map_buffer_range a core extension or not?
In the previous article I emphasized the importance of not using a third-party loading library like glew, because OpenGL is too complex and unpredictable. For example, if you want to implement a video game with average graphics and a large audience of users, OpenGL 2.1 is probably enough. At that point, you may want to load only that part of the library and properly check the extensions, or just use the functions that have been promoted to the core of the version you target. Remember that an extension is not guaranteed to be present on a given version of OpenGL if it's not a core feature; core extensions of this kind were introduced after 3.0 to maintain forward compatibility.
For instance, it's useful to check for the extension GL_ARB_vertex_buffer_object only on OpenGL 1.4 (in that case you may want to use glBindBufferARB instead of glBindBuffer), but not on later versions, because it was promoted to core from version 1.5 onward. The same applies to other core versions and extensions. If you target OpenGL 2.1, you have to know which of the extensions typically used by 2.1 applications have been promoted to core in later versions, up to the latest OpenGL 4.5, and check the extensions on earlier versions of the library, making sure to use the appropriate vendor suffix, like ARB. Even if glew lets you make this kind of check before using the loaded functions, I don't recommend it, because glewInit() also loads parts that you don't want to use, and you risk underestimating the importance of checking the capabilities.
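The decision described above can be sketched as a small table-driven check. The promotion versions come from the text (VBOs core since 1.5, map_buffer_range core since 3.0); treat the table as an illustrative subset, not an exhaustive registry, and the function name as my own:

```python
# Version in which each extension was promoted to core (illustrative subset).
CORE_SINCE = {
    "GL_ARB_vertex_buffer_object": (1, 5),
    "GL_ARB_map_buffer_range": (3, 0),
}

def must_check_extension(name: str, context_version: tuple[int, int]) -> bool:
    """True if the feature is not core in this context and therefore must be
    looked up in the extension string (using the suffixed ARB entry points)."""
    core_since = CORE_SINCE.get(name)
    return core_since is None or context_version < core_since

# On a 1.4 context, VBOs exist only as an extension (glBindBufferARB)...
print(must_check_extension("GL_ARB_vertex_buffer_object", (1, 4)))  # True
# ...while from 1.5 onward glBindBuffer is core and needs no extension check.
print(must_check_extension("GL_ARB_vertex_buffer_object", (2, 1)))  # False
```

A real loader would fill this table from the registry rather than by hand, which is exactly what the gl.xml file discussed below enables.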
Anyway, reading the OpenGL spec and manually adding the required extensions is a time-consuming job that you may not have time for. The Khronos Group has released an XML file containing a detailed description of the extensions and functions for every version of the library; it is also used to generate the gl.h and glext.h header files with a Python script. In the same way, you can write a script that parses the gl.xml file to generate your own loading library, performing the appropriate extension checks and including only the parts that you really need to load in your project. You can find the gl.xml file here:
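A minimal sketch of such a generator, using Python's standard xml.etree on a tiny inline fragment. The fragment is my own reduction, but <feature>, <require> and <command> are the actual element names used by the Khronos registry:

```python
import xml.etree.ElementTree as ET

# Tiny inline fragment mimicking the structure of the Khronos gl.xml registry.
SAMPLE = """
<registry>
  <feature api="gl" name="GL_VERSION_1_5" number="1.5">
    <require><command name="glBindBuffer"/><command name="glGenBuffers"/></require>
  </feature>
  <feature api="gl" name="GL_VERSION_3_0" number="3.0">
    <require><command name="glMapBufferRange"/></require>
  </feature>
</registry>
"""

def commands_by_version(xml_text: str) -> dict[str, list[str]]:
    """Map each core version number to the commands it requires."""
    root = ET.fromstring(xml_text)
    result = {}
    for feature in root.iter("feature"):
        result[feature.get("number")] = [c.get("name") for c in feature.iter("command")]
    return result

print(commands_by_version(SAMPLE))
# {'1.5': ['glBindBuffer', 'glGenBuffers'], '3.0': ['glMapBufferRange']}
```

Running the same parse over the full gl.xml gives you, per target version, exactly the list of function pointers your loader has to resolve, and nothing more.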
OpenGL is not so easy to use. The API exposes thousands of functions, grouped into extensions and core features, that you have to check against every single display driver release, or the 3D application may not work. Since OpenGL is a graphics library used to program cool gfx effects without deep knowledge of the underlying display driver, many developers are tempted to use it while ignoring the technical problems. For example, the functions are loaded "automagically" by an external loading library (like glew) and used to produce the desired effect, pretending that they are available everywhere. Of course this is totally wrong, because OpenGL is scattered across dozens of extensions and core features tied to the "target" version that you want to support. Loading libraries like glew are dangerous because they try to load all the available OpenGL functions implemented by the display driver without a proper check, giving you the illusion that the problem doesn't exist. The main problem with this approach is that you cannot develop a good OpenGL application without answering the following question:
- How many OpenGL versions and extensions do I have to support?
From this choice you can define the graphics profile of the application and how to scale it to support a wide range of display drivers, including the physical hardware and the drivers provided by virtual machines. For example, VirtualBox with guest additions uses Chromium 1.9, which provides OpenGL 2.1 and GLSL 1.20, so your application won't start if you programmed it against OpenGL 4.5; even worse, it won't start on graphics cards that support at most version 4.4 (which is very recent). For this reason, it's necessary to have full awareness of OpenGL's scalability principles, which must be applied to start on most of the available graphics cards, reducing or improving the graphics quality based on the version you decided to target. With this level of awareness, you will realize that you don't need any loading library to use OpenGL, only a good check of the available features, which you can program by yourself. Moreover, libraries like glew are the worst, because they replace the official gl.h and glext.h header files with a custom version anchored to the OpenGL version supported by that particular glew release.
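The first step of such a capability check is parsing the version string that glGetString(GL_VERSION) returns at runtime. A sketch of that parse follows; the helper name is mine and the sample strings are hypothetical examples of real-world formats, since actually querying a driver requires a live GL context:

```python
def parse_gl_version(version_string: str) -> tuple[int, int]:
    """Extract (major, minor) from a GL_VERSION string like '2.1 Chromium 1.9'."""
    major, minor = version_string.split()[0].split(".")[:2]
    return int(major), int(minor)

# A VirtualBox guest reporting OpenGL 2.1 through Chromium...
print(parse_gl_version("2.1 Chromium 1.9"))            # (2, 1)
# ...will never satisfy a hard requirement of 4.5:
print(parse_gl_version("2.1 Chromium 1.9") >= (4, 5))  # False
```

Once you have the (major, minor) pair, the application can pick the highest render path it supports at or below that version instead of refusing to start.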