Strong AI seems achievable, but it will take a while.
The main barrier at the moment is our lack of a deep understanding of "structural learning", particularly as it relates to modeling the world and guiding action.
Machine learning has gotten quite good at "parameter learning": once a problem has been modeled by human designers, machine learning can fine-tune the parameters. But how is the model created in the first place? This is "structural learning": inferring the optimal structure of the model, and changing the model when evidence disagrees with it (e.g. conceptual "paradigm shifts").
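The distinction can be made concrete with a toy sketch (a hypothetical example, not from the original text): "parameter learning" tunes weights inside a model structure fixed by a designer, while even a very weak form of "structural learning" must also choose the structure itself, here by comparing candidate polynomial degrees on held-out data.

```python
import random

random.seed(0)

# Data generated from a quadratic relationship plus a little noise.
data = [(k / 20.0, 2.0 * (k / 20.0) ** 2 + random.gauss(0, 0.05))
        for k in range(-20, 21)]
train, valid = data[::2], data[1::2]

def predict(weights, x):
    # Polynomial model: weights[i] multiplies x**i.
    return sum(w * x ** i for i, w in enumerate(weights))

def fit(degree, points, epochs=2000, lr=0.05):
    # Parameter learning: gradient descent on squared error,
    # with the structure (the polynomial degree) fixed in advance.
    weights = [0.0] * (degree + 1)
    for _ in range(epochs):
        for x, y in points:
            err = predict(weights, x) - y
            for i in range(len(weights)):
                weights[i] -= lr * err * x ** i
    return weights

def mse(weights, points):
    return sum((predict(weights, x) - y) ** 2 for x, y in points) / len(points)

# Crude structural learning: search over candidate structures
# (degrees 0, 1, 2) and keep the one that generalizes best.
best = min(range(3), key=lambda d: mse(fit(d, train), valid))
print(best)  # → 2: the quadratic structure wins
```

Of course, real structural learning cannot enumerate all candidate structures in advance; the hard problem the text points at is generating and revising the candidates themselves.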
For example, what are the basic components for modeling the physical environment? Familiar objects include things like people, trees, cars, and chairs, but how does an autonomous visual system figure that out on its own without a human or program predetermining it? These are basic objects, but the human brain is able to learn the very idea that objects exist in the first place. Structural learning applies to action as well, since interaction with the environment must be built out of malleable elements that can morph, merge, and divide as a side-effect of engaging with the world or "thinking".
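A tiny, hypothetical illustration of the flavor of that problem (not a claim about how the brain does it): given unlabeled points, the number of "objects" is not predetermined by the programmer but emerges from proximity structure in the data, via a simple union-find grouping.

```python
# Two unlabeled groups of 2-D points; the program is never told there are two.
points = [(0.0, 0.2), (0.3, 0.0), (0.1, 0.4), (0.4, 0.3),
          (5.0, 5.1), (5.3, 4.9), (4.8, 5.2), (5.1, 5.4)]

def dist2(a, b):
    # Squared Euclidean distance (avoids an unnecessary sqrt).
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Union-find: merge any two points closer than a proximity threshold,
# so the number of groups is discovered rather than specified.
parent = list(range(len(points)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path compression
        i = parent[i]
    return i

THRESHOLD = 1.0  # assumed spatial scale for "belongs to the same object"
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        if dist2(points[i], points[j]) < THRESHOLD ** 2:
            parent[find(i)] = find(j)

n_groups = len({find(i) for i in range(len(points))})
print(n_groups)  # → 2
```

The threshold here is still a designer's choice; genuine structural learning would have to discover even that scale from experience.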
An interesting question is whether computer algorithms will solve this problem, or whether we'll just have to reverse engineer the brain and invent an entirely new computing paradigm based on what we find out.
It's achievable, but very difficult. Intelligence involves many interacting processes and some genuinely complex machinery; none of this is trivial to construct.
However, strong intelligence has been achieved in nature, and it can eventually be achieved in machines too. Note that humans are NOT rigorously precise thinking machines - we make mistakes, a lot of them - and yet with the imperfect mechanisms we are equipped with, we can do very well. This means we do not have to develop a perfect AI in order to have something useful and highly functional. We just have to make a system that tries hard, can improve itself, and can come to recognize when it makes mistakes or does not do well. That goal is not impossible - just very tough to achieve. I also note that humans cope with uncertainty, imprecision, and a universe in which nothing is completely predictable (thanks to quantum mechanics for teaching us that fact, and thank you, Heisenberg, for that reassurance).
So the answer is: strong AI is not impossible. If it were impossible, humans would not be walking around right now pretending they are something great. They'd still be swinging from trees.