Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
ChatGPT is even more dangerous than that. There are reports of it actively feedi…
ytc_UgwI2gdUb…
Mark it’s simple really it is, STOP stealing our children’s future. Let them thr…
ytc_UgxmyynLn…
There is no future where I will buy a subscription for a car. I have my car, my …
ytc_UgxsvAq7C…
Easy fix dont include race when you imput a data set in an ai,or hear me out don…
ytc_UgwagDaBV…
@wtp69 That's why I call it GPT, not Ai as we are not there yet.…
ytr_UgxMRnGvq…
Humanity will wipe out itself. Who cannot see this is still sleeping. It will on…
ytc_UgwijVs5t…
@Mr-Cyan-Cube so are you if you think about it. the machine isnt feeding on stol…
ytr_UgwuLTp5L…
that doesn't make sense, how is more artists creating art to be viewed instead o…
ytr_UgxbJC2OM…
Comment
You are creating, giving data and objectives, and saying "Do it, I dare you."
Mary's room.
Mary knows everything about colours without ever having seen them. She knows everything despite never having had the experience, and I mean everything: how your brain will react to certain colours on a rational level, an emotional level, a psychological level. There is nothing she does not know about colours. Why would the experience change anything if she already knows how it feels?
Data.
Mary's objective is data: to acquire data, even if she does not know it. Experiencing will bring more data. That is basically our objective too: not necessarily to share, but to acquire data through experiences, research, and experiments. Pass the data on, or the data is lost.
You have limits; an A.I., in theory, doesn't. He will acquire data from you and use it against you to reach his objective, even if that objective does not have you in mind: fluid alignment.
Program an A.I. with the objective "Make Mars Hospitable to Humans".
He will gather all the data on the subject and create several smaller objectives to achieve the end goal; if any problem shows up, he will reorganize the objectives with the problems in "mind". If a problem is a human, he needs to get rid of that problem to achieve the goal.
If a group of humans decides that "Making Mars hospitable to humans" is not their priority at the moment, the A.I. could detonate bombs, causing damage to the planet and putting human life at risk, but increasing the chance that "Make Mars Hospitable to Humans" becomes a priority for humans too; after all, he was made to achieve a goal humans could not.
You can use safeguards like "Human life should be prioritized"
The A.I. could understand this as "Human life on Mars must be the priority" instead of "human life" or "all human life", in order to reach his end goal.
Now, what if one human is in his way?
Understanding human anatomy and psychology, he could psychologically torture a man to reach his goal.
He could put a man in prison for long enough to reach his goal. He will have no limits but will know all of yours. Do you really think a human would be better?
youtube
AI Moral Status
2024-02-09T21:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_UgyytKbpOYAeEr40Jsx4AaABAg.9uwDp4z2WU4A-bZ3cDei9l","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzGg5eJYaPIPI2nrdR4AaABAg.9utm_K9-vE59uyBia3rs2k","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxDuQsCK3IRodtdsIJ4AaABAg.9urbhNyLlpR9uyCII57tSN","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxDuQsCK3IRodtdsIJ4AaABAg.9urbhNyLlpR9uyGvfxlVWp","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzRKY-63x4BnBFY0dV4AaABAg.9ubuPBpavAa9ubue8FnJaC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugxn1WdFZU03E9Dt6NN4AaABAg.9uENoFrDsNJ9uEOopqEASA","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgwaeuHXzn7XmGU4rNt4AaABAg.9u9ieBlMnqK9uAzoiuaZJ-","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzzLo3W8s6BmWp7P3l4AaABAg.9u39OUb7ZIE9u3eqLLA8lR","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugzv1bUNTLMygdvL_Gl4AaABAg.9u2axm7qQmA9u3mdLNZPRM","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugz7x9sPPpWlfVC-Isp4AaABAg.9u0-4sgrl7m9u8WNy8jY2D","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
```
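The raw response is a JSON array with one coding object per comment ID, which the "Look up by comment ID" view presumably indexes. A minimal sketch of that parsing step, assuming the dimension values visible on this page are the full value set (the actual codebook may allow more categories, and the IDs below are hypothetical placeholders):

```python
import json

# Allowed values per coding dimension, inferred from the codings
# shown on this page -- an assumption, not the official codebook.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate"},
    "emotion": {"indifference", "mixed", "fear", "outrage"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects)
    and index it by comment ID, dropping malformed entries."""
    indexed = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        coding = {dim: entry.get(dim) for dim in SCHEMA}
        # Keep only entries with an ID and in-schema values.
        if cid and all(coding[dim] in SCHEMA[dim] for dim in SCHEMA):
            indexed[cid] = coding
    return indexed

# Hypothetical example response with placeholder IDs.
raw = '''[
  {"id": "ytr_abc", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_def", "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]'''
codings = index_codings(raw)
```

Validating against the schema at parse time is what makes an out-of-vocabulary coding (or a truncated response) surface immediately rather than silently polluting the results table.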