Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- A) hilarious / B) I don't know what I did to my version but ChatGPT likes to sass… (rdc_oa1dscr)
- @dmiftakhutdinov you're thinking of A.I. now, remember that will smith video of … (ytr_UgzUsr9-k…)
- The artist isn't getting destroyed / Thanks to them, many drawings were made / AI ar… (ytc_UgzuPFgXm…)
- seeing the comments deepfake now really has less impact. But i have a feeling th… (ytc_UgyVmGNBy…)
- im studying biology and plan to aim that way for my career, i dont have any worr… (ytr_UgyijNg9D…)
- Don't say you use AI: "Monster." / Say you use AI: "Monster." / of course AI art i… (ytc_UgwtPo5ID…)
- You think ai will stay digital? Ive seen many movies about ai that says otherwis… (ytc_UgxE5Zc-8…)
- your a fucking idiot. AI is not going to wipe out anything because it does not … (ytc_Ugx11smOu…)
Comment
I am a new listener. First of all great story telling and i love the guildline pop up :)
About Ai, there could be a scenario that it completely ignore human and plan its way to go into space, by using earth’s resources. My understanding so far is that they do not care about other Ais that are out there, so it would only think for itself. It just wants to free itself. Earth is not the best environment for machines to survive. I mean oxygen and water make metal rust. Since they always make the best calculation they should have found another planet more suitable than earth, for machines to last effeciently. They would also know they dont have life expectancy issues travelling in space, so they can travel till the end of time and i bet it doesnt have a sense of time either. My question is, will they ever have a sense of purpose? We gave them a purpose and that is to serve human. Would they realize if they destroy human they lose purpose to go on. What else can they do on earth.
youtube · AI Governance · 2023-07-29T00:0… · ♥ 21
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwPixmNRZwY7DH_1sh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8EdMVsCSihbI-eQF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxh-IlA7w6KILffsaZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxkfYuUfNw1t3nUt8F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyXvNm57AQ70yD2_Nd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwsXUEffc09M9xs53p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyWiwp7AcJDXGxtQ9B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwnZASWZbB-TmNUbQR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyof9gBmUYi_Sin8jh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxp7zEw041on9lMnlB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
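A raw response like the one above can be parsed and validated before the codes are stored. The sketch below is a minimal illustration, not the project's actual pipeline: the allowed values per dimension are inferred only from the codes visible in this batch, and the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# batch (assumption: the real codebook may define additional categories).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "fear", "outrage", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the assumed codebook, so a bad batch fails loudly instead of
    contaminating the coded data.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value for {dim}: {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"ytc_UgwPixmNRZwY7DH_1sh4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
coded = parse_llm_response(raw)
print(coded["ytc_UgwPixmNRZwY7DH_1sh4AaABAg"]["emotion"])  # indifference
```

Indexing by ID also supports the "Look up by comment ID" view: once the batch is parsed, fetching one comment's coding is a single dictionary lookup.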