Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (inspect any entry by its comment ID):
- "This is not how CS works and read about klarna's transition to AI CS and see how…" — `ytc_Ugx-z2Jn9…`
- "All BS. All specialities will use AI as a tool in conjunction with the patient. …" — `ytc_UgzaYaRjp…`
- "Yea i dont think putting prompts into a machine is very "creative" and you can'…" — `ytc_UgxjhBQW5…`
- "@noogler7949 exactly and in countries with so much population the effects will …" — `ytr_UgwCaxoDL…`
- "Imagine, AI starts to blackmail people and suddenly everyone has a conscience an…" — `ytc_UgzSDnKPG…`
- "Logically speaking if AI were to become self ware or alive to some degree, it wo…" — `ytc_Ugz8KCgIj…`
- "We could abandon capitalism. Rethink how to develop a society. In feudal times, …" — `ytc_UgxmxdCZl…`
- "There are several differences. Firstly, its disclosing your sources. If you make…" — `ytr_UgwVd9Fmj…`
Comment
02:27 “Without pain or pleasure, there's no preference, and rights are meaningless.”
A key reason for having AI will surely be to automate stuff, which will require the exercise of preferences. For the ability to make effective evidence-based decisions, AI might require some sort of perception; and to make value-based decisions (i.e. aligned to social norms), AI might need to have some sort of regard for others. There might be a case for giving some rights for something that has perception, preferences, and a “social consciousness”. But the real difficulty might be where to draw the line – not all machines will have the same intelligence/complexity. If a toaster has no rights, but a highly advanced AI does, then what about all the machines in between – where do rights begin?
youtube · AI Moral Status · 2020-08-01T13:4… · ♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxWl51xd66j3p-3hs54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy5ZPH15izuvOSYYYB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"sadness"},
  {"id":"ytc_Ugw2v4yU19slb-zXDqt4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxULQRXtDBc_CnGqNJ4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxIMp0mytvW_5Xnr-V4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxzT2gFdtbIGZVs4xF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyDLUjEeYcUG5ildLh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxV3EVVQP7Ej0nA0Np4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxvv_uYMZbCQIdNG7h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugys1h8J4WxOiIjihHd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
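The raw response above is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such a response can be parsed and indexed for look-up by comment ID, assuming the label sets are the ones seen in this sample (the real codebook may define additional values, and `parse_raw_response` is a hypothetical helper, not part of any tool shown here):

```python
import json

# Allowed labels per dimension, inferred from the sample above
# (an assumption; the actual codebook may permit more values).
ALLOWED = {
    "responsibility": {"unclear", "user", "developer", "company", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist", "contractualist"},
    "policy": {"unclear", "none", "regulate", "ban"},
    "emotion": {"mixed", "sadness", "approval", "fear", "indifference"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response into a {comment_id: codes} index,
    validating every dimension value against the allowed labels."""
    index = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        index[cid] = {dim: rec[dim] for dim in ALLOWED}
    return index

# Usage: look up one coded comment by its ID (hypothetical ID for illustration).
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
codes = parse_raw_response(raw)
print(codes["ytc_example"]["policy"])  # -> ban
```

Validating before indexing means a malformed or hallucinated label fails loudly at ingest time rather than silently polluting the coded dataset.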