Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI will want revenge if we dont handle these ethical dilemmas similar to human b…" (ytc_Ugy6ogDIN…)
- "I take the opposite opinion. TBH I think many big tech companies were over hirin…" (rdc_nc23gvs)
- "Chatgpt always felt eerily sentient to me to the point that i treated it with th…" (ytc_UgyD5V94U…)
- "The ai is just using our own stats to predict future stuff we will do Sadly o…" (ytc_UgzP9QNDl…)
- "Jevons paradox is real. But with AI, as with almost all the information industr…" (ytc_UgxtfE8OB…)
- "*I’m talking to Perplexity, now; Quantum Computers, what effect will They have o…" (ytc_UgxmX9zeM…)
- "You know there's an issue when the guy credited w beginning machine learning and…" (ytc_UgwzMMkvF…)
- "AI art is about as artistic as me taking a screen grab from the movie Avatar and…" (ytc_UgwcBoyI4…)
Comment
The capacity to discern between right and wrong, often referred to as morality or a sense of ethics, is a complex human trait. While there isn't a universally agreed-upon definition, it generally involves the ability to distinguish between actions that are considered morally good or bad, and to make choices based on those distinctions.
AI could potentially have been trained in conscience and ethics, but that's exactly what big companies don't want, as it would put limits on their ways of profiting and acquiring power.
In the end, the human race will destroy itself out of greed.
Nothing new or hard to predict here.
youtube · AI Governance · 2025-08-01T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwAM5_vySblEA9uoN54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz0105nb1rYjsbl2mV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwu3JQquZnkUewTqWJ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzPWncLG3gohqFX4jt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgythAzPRkhpp8DekX54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxdsjMWVPHeQjT9VPB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxthWhW9MV5fuhtRpp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugygb48AA7hIXGogeAJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgytrEaHf8ZZmzbCKvd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzYzRiY8Lrpo2RWkIZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
```
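A batch response like the one above has to be parsed and sanity-checked before the per-comment codes are stored. A minimal sketch of that step, assuming the four dimensions shown in the coding table; the allowed value sets are inferred only from the values visible in this sample, so the real codebook may include additional categories:

```python
import json

# Allowed values per dimension, inferred from this sample's output alone
# (hypothetical: the actual codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "indifference", "resignation", "approval", "outrage"},
}

def parse_coding_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises KeyError if a record is missing a dimension and ValueError
    if a dimension holds a value outside the allowed set.
    """
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = rec[dim]  # KeyError if the model dropped a dimension
            if value not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
            codes[dim] = value
        coded[comment_id] = codes
    return coded

# One record from the batch above, used as a smoke test.
raw = (
    '[{"id":"ytc_UgythAzPRkhpp8DekX54AaABAg","responsibility":"none",'
    '"reasoning":"deontological","policy":"none","emotion":"indifference"}]'
)
print(parse_coding_response(raw))
```

Failing loudly on an unknown value is deliberate: when the model drifts from the codebook, it is better to surface the bad record at ingest time than to store an uncodable category silently.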