Some good answers here already.

We cannot “see” what is happening at that level.

We have to make machines that do strange things, then look for little regularities or irregularities in their strange behavior, and search for mathematical equations that explain those. Then we explore the mathematics related to those equations and design experiments to test which of their implications seem more likely than others.

After many years of doing that, we have a handful of strange properties with mathematical expressions that seem to be very reliable in most contexts we have examined in detail.

Two of those in particular seem interesting to me in respect of this question.

One is the Planck constant. It is not itself an energy (its units are energy multiplied by time, a quantity physicists call action), but it sets the smallest packet of energy that an oscillation of a given frequency can carry in this universe: E = hf.

It is usually expressed as a relationship with an oscillating property that most people conceptualize as a “wave”, but that is more accurately characterized as a relationship between space and time. The electric and magnetic fields associated with this oscillation are orthogonal: they point at right angles to each other and to the direction of travel. Look at either one alone and it comes and goes, like a wave passing by; in a travelling wave the two rise and fall together, while in a standing wave one peaks exactly where the other vanishes. That kind of makes sense, but it gets weirder.
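That relationship between energy and oscillation is usually written E = hf. A minimal sketch of what it gives for visible light (the 500 nm wavelength is just an illustrative choice, roughly green light):

```python
# Energy carried by one quantum of visible light, using E = h * f.
h = 6.626_070_15e-34       # Planck constant, J*s (exact by SI definition)
c = 299_792_458.0          # speed of light in vacuum, m/s (exact)

wavelength = 500e-9        # ~500 nm, an illustrative visible-light choice
frequency = c / wavelength # ~6e14 oscillations per second
energy = h * frequency     # smallest energy a wave of this frequency can carry

print(f"frequency: {frequency:.3e} Hz")
print(f"photon energy: {energy:.3e} J")  # ~4e-19 J
```

Tiny as that number is, it is the indivisible unit: a 500 nm wave can deliver one such packet, or two, but never half of one.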

The other property of interest is Heisenberg uncertainty, which in one formulation states that you cannot know both the momentum and the position of anything to arbitrary precision: the product of the two uncertainties has a floor set by the Planck constant (Δx·Δp ≥ ħ/2, where ħ is the Planck constant divided by 2π).
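To get a feel for where that limit bites, here is a rough sketch for an electron confined to an atom-sized region (the 1-angstrom confinement size is an assumed, typical atomic scale):

```python
# Minimum momentum uncertainty for an electron confined to an
# atom-sized region, from the Heisenberg relation dx * dp >= hbar / 2.
hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
m_e = 9.109_383_7e-31      # electron mass, kg

dx = 1e-10                 # ~1 angstrom, a typical atomic size (assumed)
dp_min = hbar / (2 * dx)   # floor on the momentum spread, kg*m/s
v_min = dp_min / m_e       # corresponding spread in speed

print(f"momentum spread: {dp_min:.3e} kg*m/s")
print(f"speed spread: {v_min:.3e} m/s")  # ~6e5 m/s
```

That is a speed spread of hundreds of kilometers per second, just from confinement: squeeze the electron into a smaller box and the spread only grows.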

When you combine these two properties, you get a spatial relationship expressed as a set of probabilities, and for the energies that electrons normally have, they cannot be squeezed indefinitely toward the center of the “atom”: confining an electron to an ever smaller volume forces an ever larger uncertainty in its momentum, and hence a larger kinetic energy than the energy we know it to have, so there is a minimum size below which the electron cloud cannot shrink.
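That trade-off can be turned into a back-of-the-envelope estimate of how small an atom can be: treat ħ/r as the electron's momentum scale and minimize the total energy. A sketch in Python (the p ~ ħ/r estimate is the standard heuristic, not exact quantum mechanics, but it lands on the right answer):

```python
import math

# Physical constants (SI units, CODATA values, rounded)
hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
m_e  = 9.109_383_7e-31     # electron mass, kg
e    = 1.602_176_634e-19   # elementary charge, C
eps0 = 8.854_187_8128e-12  # vacuum permittivity, F/m
k    = 1 / (4 * math.pi * eps0)  # Coulomb constant

def total_energy(r):
    """Rough energy of an electron confined within radius r:
    a kinetic term from the uncertainty principle (p ~ hbar/r)
    plus the Coulomb attraction to the proton."""
    kinetic = hbar**2 / (2 * m_e * r**2)
    potential = -k * e**2 / r
    return kinetic + potential

# Shrinking r raises the kinetic term faster than the Coulomb term
# falls, so the energy has a minimum at a finite radius:
# r = hbar^2 / (m_e * k * e^2), which is exactly the Bohr radius.
r_min = hbar**2 / (m_e * k * e**2)
E_min = total_energy(r_min)

print(f"equilibrium radius: {r_min:.3e} m")        # ~5.29e-11 m
print(f"ground-state energy: {E_min / e:.2f} eV")  # ~ -13.6 eV
```

The heuristic reproduces the measured size of the hydrogen atom and its ionization energy, which is why this uncertainty argument is the usual answer to why atoms do not collapse.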

That is a weird way to think about things, and yet it does seem to be an accurate one, if you just take the information principles embodied in the equations that make up quantum mechanics. As others have noted, that is a system of equations that works very well and has been subjected to very close experimental scrutiny over the last 100 years.

It is a very different way of thinking about things from the classical notion of truth, and because of that, many minds simply reject it.

It is a way of thinking that is fundamentally uncertain, and fundamentally probabilistic, both at the same time.

The really strange thing about it is that it works at this level of the very tiny, and when you add together large collections of these tiny “things” (though as others have noted, “thing” isn’t a particularly useful idea at the scale of the very small) over large collections of their time units, they very closely approximate classical causality.

So something that is fundamentally uncertain within certain limits can in large collections appear like something lawful.

And the smallest thing an unaided human eye can see contains on the order of 10^16 of those fundamental units, and the shortest time a human can experience spans more than 10^40 of their fundamental time units. So when you get collections of probability distributions with more than 10^50 instances, they form very reliable and predictable distributions almost all of the time.
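The way fluctuations fade as collections grow can be sketched with coin flips: the relative spread of a sum of n independent random events shrinks like 1/√n (the trial counts below are arbitrary illustrative choices):

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def relative_spread(n, trials=2000):
    """Sum n fair coin flips (0 or 1) many times over, then return the
    standard deviation of the sum divided by its mean."""
    sums = [sum(random.randint(0, 1) for _ in range(n)) for _ in range(trials)]
    mean = sum(sums) / trials
    var = sum((s - mean) ** 2 for s in sums) / trials
    return var ** 0.5 / mean

for n in (10, 100, 1000):
    print(f"n={n:5d}  relative spread ~ {relative_spread(n):.3f}")
# The spread shrinks like 1/sqrt(n); extrapolated to n ~ 10^50
# events it is ~10^-25, far below anything a human could notice.
```

So nothing about any individual event becomes more certain; it is only the aggregate that becomes predictable.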

Hence the illusion of causality we are used to works in practice, most of the time.

Just like the idea that the earth is flat is good enough if you never leave the valley of your birth and never build anything more complex than a house made of lumber. (Carpenters still use the idea that the earth is flat today; at that scale it is a useful approximation.)

All ideas, all knowledge, seem to have this property of being a useful approximation to something at some scale (even quantum mechanics, and that is an idea that some mathematicians and physicists have a lot of difficulty with).