The short and obvious answer is yes – beyond any shadow of reasonable doubt.
The long answer is beyond the scope of a post like this, but if you look into the work of people who have made serious advances in our understanding of computation, algorithms, and the human brain, then you may begin to get a sketch of the complexity embodied in the sort of awareness we have, and of what it will take to replicate and exceed it.
Von Neumann, Turing, Minsky, Wolfram, Darwin, Dawkins, Yudkowsky, and Snowden are good starting places, and there are hundreds of other good books out there well worth reading. Such reading needs to be complemented with experience. Spend a few hundred lab hours working with neurons, a few thousand hours in nature studying plant and animal interactions, a few thousand hours programming computers, and a few thousand hours in politics (looking from a systems perspective to optimise outcomes). Spend time building and repairing physical things, from houses to radios to computers, to get a feel for complex systems and their interactions. Then spend some serious time on the theory of search across infinite domain spaces, and on the sorts of problems present there (the many classes of halting problems), and you will begin to gain some appreciation of the complexity of the task, and of the progress that has been made.
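To make the halting-problem point concrete: for search over an infinite domain, the best a program can generally do is confirm success within some step budget; running out of budget tells you nothing. A minimal Python sketch, using the Collatz conjecture as an illustrative example (the function name and budget are my own choices, not anything from the texts above):

```python
def collatz_halts_within(n, max_steps):
    """Semi-decision procedure: True means the Collatz sequence
    starting at n reached 1 within max_steps.
    False means only 'unknown within this budget', NOT 'never halts'."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False  # budget exhausted; halting status still open

# 27 takes more than 100 steps to reach 1, so the answer you get
# depends on how much search budget you are willing to spend.
print(collatz_halts_within(27, 1000))
print(collatz_halts_within(27, 50))
```

The asymmetry in what True and False mean here is the essence of semi-decidability: no finite budget can turn "not yet" into "never".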
We are seriously complex.
What we can do is amazing.
The sorts of traps that have most of us in their grip need to be appreciated.
The sorts of dangers that exist also need to be appreciated – they are many and serious.
There will always be a tension between the hard-won lessons of the past and the untested possibilities available to the future. That is the nature of our existence. Finding a balance that is survivable is the art of life.
The complexity of the systems embodied within us that have allowed us to survive thus far ought not to be underestimated.
The fundamental role of cooperation in the operation of those systems must be appreciated if we are to survive. In this context, competitive systems are essentially cancerous and terminal to system survival (terminal to us as human beings). So some serious changes to our social and economic systems are urgently required (and that will be difficult for many on the conservative end of the spectrum to appreciate).
So yes – possible, and dangerous, and doable, and essential if we are to survive very long as a species. The sorts of problems we face need that sort of intelligence to solve them. It has dangers, but the dangers of a future without it seem to me to be orders of magnitude greater. In a globally cooperative context that values diversity and independence, there is a high probability of very positive outcomes; in a context dominated by competitive markets and nationalism, a very low probability of survival (and that probability doesn’t seem to alter much with or without creative AI).