What if you could have AGIs that experience qualia of their own?

It would be difficult to prove that they do, but if you accept mind uploading even as a theoretical possibility, it is not a far stretch to say that an artificial mind could experience qualia too, since an uploaded human mind would.

What if, furthermore, AGI minds that experience qualia are more energy-efficient than human minds?

For the sake of the argument, let’s say you can run two AGI minds for the energy cost of simulating one uploaded human brain. To make this even more interesting from a philosophical viewpoint, you would not even need to kill the humans: just put the human mind simulation into long-term storage and run it very seldom, say 1 day every 10,000 years, which works out to roughly 3 × 10⁻⁷ of real-time speed.
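As a quick sanity check on that factor, here is a minimal sketch in plain Python; nothing is assumed beyond the figures above, and the helper name is just for illustration:

```python
DAYS_PER_YEAR = 365.25  # average, including leap years

def slowdown_factor(active_days: float, period_years: float) -> float:
    """Fraction of real time a simulation experiences if it runs
    for `active_days` out of every `period_years` years."""
    return active_days / (period_years * DAYS_PER_YEAR)

factor = slowdown_factor(active_days=1, period_years=10_000)
print(f"{factor:.2e}")  # ~2.74e-07, i.e. roughly 3 x 10^-7 of real time
```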

Why would it be unethical from a utilitarian perspective to do so?

How much slower could you run a human simulation before it becomes unethical?

Under what ethical system could you argue for choosing humans over AGIs if both experience similar levels of happiness/utility?

What if the AGI or an ASI can “persuade” the human to voluntarily run slower?

The simplest solution to all of this is to postulate that existence is suffering and that creating minds that experience qualia is unethical in the first place.