I started learning game development 17 years ago, when I accidentally found a free version of Game Maker 6.1 on a CD as a high school student. I was studying computer programming, and despite being a gamer it had somehow never occurred to me that I could make games instead of software. But when I saw that program, everything changed for me.
How I came to choose Unity
After Game Maker, in my quest to learn more advanced tools, I found 3D GameStudio and some other very odd engines like Quest3D, and used GameStudio for a while. The loop was code, compile, run, test. Later on I wanted to get serious and make money from games instead of making free ones, and after a great many talks with a friend, we decided to make a multiplayer Harvest Moon for Facebook. There was a new engine which ran 3D on the web, used PhysX instead of ODE, and had a general-purpose programming language, which meant that even if I did not succeed in game development I could use it for other types of programming, or at least that is what I thought back then. I bought a C# book and started learning Unity. Unity was a small engine whose devs seemed to make all the right decisions. It was around 2009, and after a few emails with their CEO I found out that their beta version for Windows would be out in a month or so.
Long story short, I have developed games mostly as a contract developer and worked on middleware since then, but I occasionally checked other engines and technologies. I did my server programming mostly in Microsoft Orleans and ASP.NET Core, and did not go back to C/C++ much after university, simply because the engines best suited to making games quickly were not using them, at least in the jobs and places I could get work from. I checked UDK (Unreal Engine 3) a few times, and before settling on Unity I evaluated multiple MMO engines for that Harvest Moon game which we never finished. All of them were hard to configure and use, and very expensive; Monumental, BigWorld and HeroEngine were among them. Doing this, and reading a lot on game engines and game engine architecture, gave me good theoretical knowledge and an overview of the different ways of doing things, and I never was a Unity fanboy or anything like that.
Why I started to learn Unreal
When UE4 came out I was deep in projects, and I also thought Unity would revolutionize the industry with DOTS and Mike Acton at the helm, but I looked at it anyway, since why not; I like looking into things. I looked into Erlang, Rust and Go, worked a bit with some of them, and even shipped fixes for software written in some of them. I looked into how Unreal does networking, which is the main thing I do, and into other areas I got curious about over time, but I never thought much about switching, given the games we were making and the optimism mentioned above. Our pre-productions and prototypes failed and DOTS never materialized, so two years ago, seeing that along with other signs from the changes after Unity's IPO, the advancements in Unreal, and the way Unity was prioritizing its business, I decided to learn Unreal. I never got to it until last summer, and since then I've been learning it on and off. I also started working on lower-level stuff again, watching the very helpful courses from Casey Muratori at https://computerenhance.com, and I plan to watch some of his Handmade Hero material. I was never an exclusively high-level programmer; I knew assembly and was the guy who did the optimizations in projects. I loved Unity's Burst compiler and the new features, but it was time to move.
Since then I have made a Utility AI tool for both Unreal and Unity with the help of other devs, and I now have a good command of Unreal's main features and of how to look into the things I need. I never spent more than 10 hours a week on it, and many weeks have been zero, but despite that I've advanced well enough. There are a few things that, if you know them and if you know your C++ or can remember it well, mean you'll be totally fine and will have lots of additional features at your fingertips. Unreal is huge, and I mean HUGE, but the engine features are well integrated with each other and you don't have to write many things from scratch. Many features are already provided at a very high level, in a way which is useful for most games. Old features are kept working, and things are more professional and production-ready overall. Keep in mind that you will not be as proficient with Unreal after 6 months, but once you learn the main differences you'll hardly have a hard time at all.
What things are different and how
There is a documentation page which is helpful. Other than the things stated there, you need to know that:
Actors are the only classes that you can put in a scene/level in Unreal, and they cannot have a parent/child relationship to each other the way GameObjects can. Some components, like the UStaticMeshComponent, can have other actors attached as their children, and you can move actors together in code, but in general the level is a flat set of actors.
The references to other actors that you can set in the details panel (Unreal's inspector) are always to actors and not to specific components on them. In Unity you sometimes declare a public Rigidbody and then drag a GameObject which has a Rigidbody onto it, but in UE you need to declare the reference as an AActor* pointer and then use FindComponentByClass to get at the component you want.
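A minimal sketch of what that looks like; the class and property names here are mine, not from any real project:

```cpp
// MyActor.h -- illustrative actor in a normal Unreal C++ project.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/StaticMeshComponent.h"
#include "MyActor.generated.h"

UCLASS()
class AMyActor : public AActor
{
	GENERATED_BODY()

public:
	// Shows up in the details panel; you drag an actor here, not a component.
	UPROPERTY(EditAnywhere, Category = "References")
	AActor* TargetActor = nullptr;

protected:
	virtual void BeginPlay() override
	{
		Super::BeginPlay();

		if (TargetActor)
		{
			// Look up the component you actually need on the referenced actor.
			if (UStaticMeshComponent* Mesh = TargetActor->FindComponentByClass<UStaticMeshComponent>())
			{
				Mesh->SetVisibility(false); // just to show we got a usable component
			}
		}
	}
};
```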
Speaking of Rigidbody, UE doesn't have such a component; the colliders have a Simulate Physics boolean which you can check if you want the physics simulation to control them.
UE doesn’t have a FixedUpdate-like callback, but ticks happen in different tick groups, and those groups are ordered around the physics simulation (before, during and after it).
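A small sketch of picking a tick group; TG_PrePhysics is one of the real enum values, the actor is illustrative, and note this is ordering relative to physics rather than a fixed timestep:

```cpp
// In the constructor of an illustrative actor.
AMyActor::AMyActor()
{
	PrimaryActorTick.bCanEverTick = true;
	// Run this actor's Tick before the physics simulation each frame.
	PrimaryActorTick.TickGroup = TG_PrePhysics;
}
```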
You create prefab-like objects in UE by deriving a blueprint from Actor or an Actor-derived class. Then you can add components to it in the blueprint and set the values of variables which you declared as visible and editable in the details panel.
In C++ you create the components of a class in the constructor. Like in Unity, deserialization happens after the constructor is called and the field/variable values are set after that, so you should write your game logic in BeginPlay and not in the constructor.
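A minimal sketch of that split, assuming MeshComponent is declared as a UPROPERTY in the (not shown) header:

```cpp
AMyActor::AMyActor()
{
	// CreateDefaultSubobject only works here, in the constructor.
	MeshComponent = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("Mesh"));
	RootComponent = MeshComponent;
}

void AMyActor::BeginPlay()
{
	Super::BeginPlay();
	// Values set in the details panel / serialized data are valid from here on,
	// so gameplay logic belongs here rather than in the constructor.
}
```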
There is a concept which is a bit confusing at first called the CDO (class default object). It is the first/main instance created from your C++ class, which Unreal then uses to create copies of your class in a level. Yes, Unreal allows you to drag a C++ class into the level if it is derived from Actor. The way it works is that the constructor runs for the CDO and a flag, which I think is called IsTemplate, is set to true for it. The created object is then serialized with UE's UObject system and can be copied into levels, or used to know the initial values of the class when you derive a blueprint from it. If you change the values in the constructor, the CDO, and all other objects which did not override those variables, will use the new values. Come back to this later if you don't understand it now.
The physics engine is no longer PhysX; it is one Epic wrote themselves, called Chaos.
Traces/raycasts don’t have layers; instead they use object types and trace channels.
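A minimal line trace sketch; ECC_Visibility is a built-in trace channel, and you would usually run something like this from inside an actor:

```cpp
FHitResult Hit;
const FVector Start = GetActorLocation();
const FVector End = Start + GetActorForwardVector() * 1000.f;

FCollisionQueryParams Params;
Params.AddIgnoredActor(this); // don't hit ourselves

// Trace against whatever responds to the Visibility channel instead of using a layer mask.
if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, Params))
{
	AActor* HitActor = Hit.GetActor();
	// ... react to the hit
}
```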
The input system is more like Unity's new Input System package, but much better. Especially the Enhanced Input system is very nice and lets you simplify your input code a lot.
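A hedged sketch of an Enhanced Input binding; JumpAction would be a UInputAction asset you exposed as a UPROPERTY and assigned in the editor, and OnJump is an illustrative handler:

```cpp
#include "EnhancedInputComponent.h"

void AMyCharacter::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
{
	Super::SetupPlayerInputComponent(PlayerInputComponent);

	if (UEnhancedInputComponent* Input = Cast<UEnhancedInputComponent>(PlayerInputComponent))
	{
		// One line per action instead of string-based axis/button lookups.
		Input->BindAction(JumpAction, ETriggerEvent::Triggered, this, &AMyCharacter::OnJump);
	}
}
```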
Editor scripting is documented even more poorly than the rest of the already not-great documentation, but this video is helpful.
Slate is the editor UI framework, and it sits somewhere between declarative and immediate-mode GUIs. It is declarative, but it uses events, so it is not like OnGUI, which was fully immediate; however, it can easily be modified at runtime and is declared using C++ macros.
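A tiny Slate declaration, just to show the flavor of the syntax; SNew, SVerticalBox and STextBlock are real widgets, the function is illustrative:

```cpp
#include "Widgets/SBoxPanel.h"
#include "Widgets/Text/STextBlock.h"

TSharedRef<SWidget> BuildHelloWidget()
{
	// Declarative: describe the widget tree once; react to changes through events.
	return SNew(SVerticalBox)
		+ SVerticalBox::Slot()
		.AutoHeight()
		[
			SNew(STextBlock)
			.Text(FText::FromString(TEXT("Hello from Slate")))
		];
}
```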
Speaking of C++, you need to buy either Visual Assist (which I use) or Rider/ReSharper if you want a decent IntelliSense experience. I don't care about most other features ReSharper provides, and in fact actively dislike them, but it offers some things which you might want or need.
The animation system has far more features than Unity's and is much bigger, but the initial experience is not too different from Unity's Animators with their blend trees and state machines. Since I generally don't do much in these areas, I will not say much about it.
The networking features are built into the engine; all games are networked by default, in the sense that SpawnActor automatically spawns an actor spawned on the server on all clients too. The only thing you need to do is check the Replicated box on the actor, or set it to true in the constructor. You can easily add synced/replicated variables and RPCs, and the default character is already networked.
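A minimal replication sketch; Health is an illustrative property which would be declared UPROPERTY(Replicated) in the header:

```cpp
#include "Net/UnrealNetwork.h"

AMyActor::AMyActor()
{
	// Server-spawned instances of this actor now appear on clients too.
	bReplicates = true;
}

void AMyActor::GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const
{
	Super::GetLifetimeReplicatedProps(OutLifetimeProps);
	// Register which properties get synced to clients.
	DOREPLIFETIME(AMyActor, Health);
}
```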
There is a Replication Graph system which helps you manage lots of objects without spending too much CPU on interest management, and it is good. Good enough that it is used in Fortnite.
Networking automatically gives you replays as well, which is a feature of the well-integrated serialization, networking and replay systems.
Many things which you had to code manually in Unity are automatic here. Do you want to use different texture sizes for different platforms or device characteristics? Just adjust the settings and boom, it is done. Levels are automatically saved in a way that loads assets the fastest for the usual path players take.
Lots of great middleware from RAD Game Tools is integrated, which helps with network compression, video and other things.
The source code is available, and you have to consult it to learn how some things work. You can modify it, profile it and, when the engine crashes, analyze it to see what is going on, which is a huge win even if it feels scary at first for some.
Blueprints are not mandatory, but they really are the best visual scripting system I've seen, because they use the same API as C++ classes and allow non-programmers to modify the game logic where they need to. When coding UI behaviors and animations you have to use them a bit, but not much, and they are not that bad really.
There are two types of blueprints. One is data-only and is like prefabs in Unity: it is derived from an Actor class or a child of Actor and just changes the values of variables without containing any additional logic. The other type contains logic on top of what C++ provides in the parent class. You should use the data-only ones in place of prefabs.
The UMG UI system is more like Unity UI, which is based on GameObjects; it uses a dedicated designer window and blueprint logic. It has many features like localization and MVVM built in.
The material system is more advanced: all materials are node graphs, and you don't start with an already-made shader and just change values like with Unity's materials. It is like using Shader Graph for all materials all the time.
Learn the gameplay framework and try to use it. Btw you don’t need to learn all C++ features to start using UE but the more you know the better.
Delegates have many types and are a bit harder to understand than Unity's at first, but you don't need them on day 1. You usually define the delegate type using a macro outside a class definition, and not all delegate types are compatible with all situations: some work with editor scripts and some need UObjects.
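A hedged sketch of the dynamic multicast flavor, which is the kind blueprints can bind to; the delegate, class and property names are illustrative, and the usual header boilerplate (includes, .generated.h) is omitted:

```cpp
// Declared outside the class, as usual for delegate types.
DECLARE_DYNAMIC_MULTICAST_DELEGATE_OneParam(FOnScoreChanged, int32, NewScore);

UCLASS()
class AScoreKeeper : public AActor
{
	GENERATED_BODY()

public:
	// Blueprints (and C++) can bind handlers to this.
	UPROPERTY(BlueprintAssignable)
	FOnScoreChanged OnScoreChanged;

	void AddScore(int32 Amount)
	{
		Score += Amount;
		OnScoreChanged.Broadcast(Score); // notify every bound listener
	}

private:
	int32 Score = 0;
};
```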
Speaking of UObjects: classes deriving from UObject are serializable, sendable over the network and subject to garbage collection. Garbage collection happens once every 30 or 60 seconds and scans the graph of objects for objects with no references. References to deleted actors are automatically set to nullptr, but that doesn't happen for all other objects. Unreal's docs on reflection, garbage collection and serialization are sparse, so if you don't know what these things are you might want to read up on them elsewhere, but you don't have to.
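One practical consequence: the garbage collector only follows references it can see through the reflection system, so UObject pointers you want kept alive should be UPROPERTYs. A tiny sketch with an illustrative type name:

```cpp
// Tracked by the GC: keeps the object alive and is visible to reference scanning.
UPROPERTY()
UMyDataAsset* TrackedData = nullptr;

// Invisible to the GC: the object can be collected underneath you and the
// pointer can dangle. Avoid raw UObject pointers outside UPROPERTYs.
UMyDataAsset* UntrackedData = nullptr;
```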
The build system is more involved and already comes with a good automation tool called UAT. Building is called packaging in Unreal, and it happens in the background. UE cooks the content (converts the assets to the native format of the target platform), compiles the code, creates the level files and puts everything in a directory for you to run.
You can use all industry-standard profilers. The built-in one doesn't give you the lowest-level C++ profiling, but it reports how much time each sub-system uses, and you can instrument your own code by adding some macros to it.
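A hedged sketch of instrumenting your own code with the stats macros; the group, stat and function names are mine:

```cpp
// Usually at the top of the .cpp file.
DECLARE_STATS_GROUP(TEXT("MyGame"), STATGROUP_MyGame, STATCAT_Advanced);
DECLARE_CYCLE_STAT(TEXT("Update AI"), STAT_UpdateAI, STATGROUP_MyGame);

void AMyAIController::UpdateAI()
{
	// Everything in this scope is measured and shows up under the MyGame stat group.
	SCOPE_CYCLE_COUNTER(STAT_UpdateAI);
	// ... expensive work here
}
```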
There are multiple tools which help you with debugging: the Gameplay Debugger helps you see what is going on with an actor at runtime, and the Visual Logger captures the state of all supported actors and components and saves it, so you can open it and check everything frame by frame. This is separate from your standard C++ debuggers, which are always available.
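A quick Visual Logger sketch; UE_VLOG is the real macro (it compiles out of shipping builds), while the category and values here are illustrative:

```cpp
#include "VisualLogger/VisualLogger.h"

// Text entry tied to this actor for the current frame.
UE_VLOG(this, LogTemp, Log, TEXT("Chasing target at %s"), *TargetLocation.ToString());

// A sphere drawn at the target's position in the Visual Logger timeline.
UE_VLOG_LOCATION(this, LogTemp, Log, TargetLocation, 25.f, FColor::Red, TEXT("Target"));
```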
I hope the list and my experience is helpful.
Hi Ashkan, nice write up!
I will probably port a game of mine from Unity to Unreal as a learning exercise to get to know Unreal deeper (and hopefully solve all of the associated issues I've been having with garbage collection latencies in the game). Your post is a good starting point, especially since we have a similar background as you know. ;-)
Petter
Thanks for writing this up. I might have to consult this again should I decide to make the switch. For now, I'm hoping Unity will backpedal enough that I can justify staying.