What will it mean for robots with AI (artificial intelligence) to become conscious, have emotions, and gain social responsibilities?
Will they become moral subjects, with ethical obligations and rights?
It seems to me that ethics can be derived from consideration of long-term self-interest, and can be thought of as a set of risk-mitigation strategies.
It seems to me that it is in the long-term self-interest of all sapient entities to respect and support all other sapient entities. Our species hasn't quite worked that out yet; we are still, for the most part, trapped in outdated scarcity-based modes of valuation (markets), which served us well during most of our cultural evolution but cannot deal with the abundance now technically available.
We need to start demonstrating by our actions that we value all sapient life before we bring non-organic sapience to awareness. If we don't, we will appear as a threat, and that is not a healthy way to appear.