Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "So would be a point where machines will not need any human around. Game over.…" (ytc_Ugw60gXSi…)
- "Grok is better than GPT, I have used both. Amazing how apples trying block Grok…" (ytc_UgwYdVXLA…)
- "AI is not truly artificial intelligence what it is is just more advanced automat…" (ytc_Ugxin2W33…)
- "This guy might be the godfather of AI but he has vested interest in this bubble …" (ytc_Ugz6XaUcH…)
- "The genie is out of the bottle already. Does OpenAI have some secret sauce? Not …" (ytc_UgxZyhZBu…)
- "@SergiusXVII What I mean is that AI language models are supposed to be platforms…" (ytr_UgxEv0_xe…)
- "you can fight but you wont win. this entire thing is quite fascinating as you ar…" (ytc_UgwVRJrCL…)
- "@Kontingency_Operations oh yeah, fs. Because working a perpetual 9-5 for fifty y…" (ytr_UgztvDi3W…)
Comment
Would A.I choose to adhere to concepts of morality amongst themselves? Would it create a good v evil a.i dichotomy among their population? Do they need their own "ten commandments"? Just how much of this would manifest itself mimicing humanities own tribalism but just with this new rival/superior "species"? Pretty fascinating if it goes any route actually! Evil hivemind a.i with some dictator/master equivalent. A nice one is that humans and a.i become best buds and we search the stars together with them largely solving all of our problems and all humans just go on creative endeavors with a.i. Maybe Evil humans just weaponize a.i against the good and we call it a day eh?
Plz Sydney dun kil meh
youtube · AI Governance · 2023-07-07T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugyp4oi-W6TemR_eHct4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxO6eJ_TB0z2LhM2U94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOUuU2w_NvE2obmg14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzZxiSIdML_16ksrx54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxA7W3i5uxMAXWGmXV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugza8ag9QOEG1QX2V4F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJRBTgStpoz9OSMqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwFAx4JoyuRA19B7nh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwgCXlDrl0yMqVwzcx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyUB8KoCTUGwlZbAup4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
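The "Look up by comment ID" view above amounts to parsing the raw batch response and matching one ID against it. A minimal sketch of that lookup, assuming the JSON schema shown in the raw response; the helper name `lookup` and the inlined two-record sample are illustrative, not part of the actual tool:

```python
import json

# Sample of the raw batch response: one JSON object per coded comment,
# with the codebook dimensions shown in the Coding Result table above
# (responsibility, reasoning, policy, emotion).
RAW_RESPONSE = '''[
  {"id": "ytc_UgwFAx4JoyuRA19B7nh4AaABAg",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwgCXlDrl0yMqVwzcx4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "regulate", "emotion": "outrage"}
]'''

def lookup(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for record in json.loads(raw):
        if record["id"] == comment_id:
            # Drop the ID itself; keep only the coding dimensions.
            return {k: v for k, v in record.items() if k != "id"}
    return None

coding = lookup(RAW_RESPONSE, "ytc_UgwFAx4JoyuRA19B7nh4AaABAg")
print(coding)
# {'responsibility': 'ai_itself', 'reasoning': 'mixed', 'policy': 'none', 'emotion': 'fear'}
```

The first record matches the Coding Result table for the selected comment; the "Coded at" timestamp would come from the coding run's own metadata, not from the model response.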