Artificial Intelligence is a mirror of human biases. How can both AI and our own minds become less biased, and how quickly?

What data that has not yet been collected should we collect to make AI more ethical?

If every organization has its own corporate guidelines, wouldn’t that result in its own interpretation of what is ethical?

Can AI distinguish what is ethical from what is moral? How does a profit-seeking attitude affect this?

If everyone is responsible for data, then no one is. Should we establish a shared, central understanding of what data actually means for every department?