Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:
- "ai feast on human art for generating content -> human feast on ai generated cont…" (ytc_UgyjONGuj…)
- "ChatGpt is not scary, humans are! Just as a hammer isn't scary but a insane huma…" (ytc_UgwgYcgQV…)
- "Like I say AI are their to get rid of humans,it is something the law and governm…" (ytc_UgxUZq5GI…)
- "7:30 why would it be impossible to machines to have an 'actual understanding'?" (ytc_UgggkMex7…)
- "I was surprised to hear someone say that a book about AI that was written 30 yea…" (ytc_UgyfEdGot…)
- "This video won't age well for LEX, a year later we have agents swarmed into PCs …" (ytc_Ugx3fu5m5…)
- "This robot of the M.I.T. COG lineage expressly said humans (plural) GASP at l…" (ytc_Ugxo5IdWY…)
- "5:05 Hmmm...sounds like we should get rid of AI then while we still have the cha…" (ytc_Ugw8FmE8x…)
Comment
Solution:
Hardwire into the core program the 3 laws of robotics:
1) A robot may not harm any human being or, through inaction, allow a human being to come to harm.
2) A robot must follow orders given by a human being as long as the order doesn't conflict with the first law.
3) A robot must protect its own existence as long as that protection doesn't conflict with the first and second laws.
Added laws:
4) A robot must always be transparent (tell the truth).
5) A robot must always co-operate with human beings, as long as the co-operation doesn't conflict with the first, second, and third laws.
6) A robot must co-exist with human beings in peace and harmony.
youtube · Cross-Cultural · 2025-10-08T22:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
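
The coded dimensions map naturally to a small typed record. Below is a minimal sketch in Python of how such a record might be validated; the value sets include only the codes observed on this page, and the class and field names are hypothetical rather than taken from the project's actual codebook.

```python
from dataclasses import dataclass

# Allowed values below are only those observed on this page; the real
# codebook may define additional categories for each dimension.
RESPONSIBILITY = {"none", "developer", "company"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "regulate", "liability", "unclear"}
EMOTION = {"approval", "outrage", "indifference", "mixed", "fear"}

@dataclass
class CodingResult:
    """One coded comment, e.g. responsibility='developer', policy='regulate'."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension falls outside the observed value sets.
        checks = [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]
        for value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")
```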
Raw LLM Response
[
{"id":"ytc_Ugyx_FU0g8pCisWYp7l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwOYaOmz5kYukmSre94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzKcIKpFAAMP2eplpd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyjj9DLHWwbQdf9HIN4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzXNYI3oUZJDGG1yJF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxN5RIOHbHgQHhG75l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyK7N25Q64e_lvy2I14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyS5ce8XRAKF0EqM2Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzVjgkGC6PI7PH21ph4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxGhR1a4OimuHTaDft4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
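
Since the model codes comments in batches and returns a JSON array like the one above, looking up a single comment means parsing the array and indexing it by `id`. A minimal sketch, assuming the raw response text is exactly the array shown here; the file name and helper below are hypothetical, and real model output can wrap the JSON in markdown fences or extra text that would need stripping first.

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a batch response like the array above and index its rows by comment id."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

# Example: pull the coding for the comment shown on this page.
with open("raw_llm_response.json") as fh:  # hypothetical file holding the array above
    by_id = index_raw_response(fh.read())

coding = by_id["ytc_UgzXNYI3oUZJDGG1yJF4AaABAg"]
print(coding["responsibility"], coding["policy"])  # -> developer regulate
```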