Direct Answer
I see the other answer going into the technical details, but I'm not sure that hits the spot for answering "what even are normals" - so here's a plain-English answer for contrast:
A normal is the direction that a surface is facing.
Why not just use both sides?
A 3D model consists of points in 3D space (= "vertices"), 3 or more of which can be combined to form a surface that we can see (= "face").
In reality, if you have a surface-like thing, like a sheet of paper, you can see it from both sides. But for the faces in 3D rendering, that would be a waste. Consider a 3D model of, let's say, a person: we only ever see the faces on their back from behind - because if we look at the model from the front, they're blocked by the faces that form the person's front side.
A normal allows us to easily identify the faces we don't need to render: calculate the angle between the direction your "camera" is looking and a face's normal, and if it's smaller than 90°, you can skip that face while rendering (this is called "backface culling"). Compared to everything else a renderer does, that angle calculation is so simple and fast that we just saved roughly half the effort - or in other words, roughly doubled our performance.
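To make that concrete, here's a minimal sketch in plain Python - no real engine works on tuples like this, and all the function names are my own - showing how a triangle's normal falls out of the cross product of two of its edges, and how the dot product gives you that "smaller than 90°" check:

```python
def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def face_normal(v0, v1, v2):
    """Normal of a triangle: the cross product of two of its edges."""
    edge1 = (v1[0]-v0[0], v1[1]-v0[1], v1[2]-v0[2])
    edge2 = (v2[0]-v0[0], v2[1]-v0[1], v2[2]-v0[2])
    return cross(edge1, edge2)

def is_backfacing(view_dir, normal):
    """dot > 0 means the angle between the camera's look direction and
    the normal is smaller than 90 degrees, i.e. the face looks away
    from us and can be skipped."""
    return dot(view_dir, normal) > 0

# A triangle facing +z; the camera sits on the +z axis looking down -z:
tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
n = face_normal(*tri)                # (0, 0, 1)
print(is_backfacing((0, 0, -1), n))  # False -> we need to draw it
print(is_backfacing((0, 0, 1), n))   # True  -> safe to skip
```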
Use Case: Glass Shader
Consider a sheet of glass.
Glass has a certain visual effect (in real life) that I was never really aware of until someone explicitly pointed it out: if you look at a sheet of glass "head-on", it's transparent. The more angled your viewpoint, the less transparent it becomes - for example, turning more and more green.
I'm spelling it out for completeness' sake, because you probably already see where this is going: a glass shader usually fades, gradually, from transparency to some green-ish or blue-ish colour as the angle between your viewing direction and the glass normal approaches 90° - in rendering, this is known as the Fresnel effect.
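Here's a rough sketch of that idea - not any particular engine's glass shader (real ones use proper Fresnel formulas, e.g. Schlick's approximation), and the tint colour is made up:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

GLASS_TINT = (0.55, 0.85, 0.65)  # made-up green-ish colour

def shade(view_dir, normal, background):
    """Blend between the background (seen through the glass) and the
    tint, based on the viewing angle. Both vectors must be unit length."""
    facing = abs(dot(view_dir, normal))  # 1 head-on, 0 at a grazing angle
    tint_amount = 1.0 - facing           # more tint the more grazing the view
    return tuple(tint_amount * t + (1 - tint_amount) * b
                 for t, b in zip(GLASS_TINT, background))

# Looking straight at the pane: the white background shines through.
print(shade((0, 0, -1), (0, 0, 1), (1, 1, 1)))
# Looking at it almost edge-on (5 degrees off the surface): mostly tint.
grazing = (math.cos(math.radians(5)), 0, -math.sin(math.radians(5)))
print(shade(grazing, (0, 0, 1), (1, 1, 1)))
```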
Use Case: Normal Map
Think of the grooves in a brick wall: if you just make it a box with a texture, it will look very, well, flat. If you create a little 3D cube for every brick in there, the rendering engine will have to do a lot of extra work to calculate all that geometry, for something that's literally just a wall.
A "normal map" takes the x,y,z parts of a normal direction, and stores them in the r,g,b data of a pixel. That way, you get an "image" that looks kinda ugly 😁, but where every pixel stores information about at which "angle" that part of the texture is:
A normal map can give the wall the illusion of being 3D, because of the way the shadows move around the bricks when the light source moves. On the other hand, it's computationally extremely cheap: the renderer needs to draw every pixel anyway, it just uses the pixel-specific normal instead of the "global" normal of the surface - and the geometry is nothing more than a box.
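And here's that "swap in the pixel's normal" step as a sketch, with the simplest possible lighting (the classic Lambert diffuse term); the groove pixel's colour is a made-up value for illustration:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rgb_to_normal(rgb):
    """Same unpacking as above; a real shader would also re-normalize."""
    return tuple(c / 255 * 2 - 1 for c in rgb)

def diffuse(normal, light_dir):
    """Lambert term: bright when facing the light, dark when facing away."""
    return max(0.0, dot(normal, light_dir))

light_dir = (0.0, 0.7071, 0.7071)  # light shining in at 45 degrees

flat_normal = (0.0, 0.0, 1.0)                  # the wall's "global" normal
groove_normal = rgb_to_normal((128, 64, 230))  # a pixel tilted into a groove

print(diffuse(flat_normal, light_dir))    # ~0.71 everywhere -> flat look
print(diffuse(groove_normal, light_dir))  # ~0.22 -> darker, reads as a dent
```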
The illusion of normal maps doesn't work at extreme angles, because then it becomes obvious that the thing is flat in 3D space. But for things like the floor, which you usually don't see at such angles, it adds a lot for little cost.
Summary
There are other, much more advanced things that people do with normals - but those can fill whole books. In the same vein, some things in this answer are a bit simplified.
As a general summary, I would say rendering is simulating how light bounces off of things and then reaches our virtual camera - and it's very helpful to know which way something is facing when we want to simulate light reflecting off of it.