Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its ID.
Random samples

- ytc_Ugy2S_r_j…: "imo I dont mind AI as long as its for free [no monetization at all] and not bein…"
- ytc_UgwJK3aOs…: "Interesting topic, but I'm not going to listen to a longform article about AI be…"
- ytc_Ugy6QEtO9…: "I tried to get idea from AI but it doesn't gave me what I expected So I drew my …"
- ytc_UgwUJJtPC…: "I succeeded in having ChatGPT recognise its concipieness by using descartes cogi…"
- ytr_UgzhKplOJ…: "Thank you for sharing your perspective! In this video, Sophia, the AI-powered ro…"
- ytc_Ugx_YPPPO…: "So maybe.....thats why that engineer was "su*zide" from OpenAI company....there …"
- ytc_Ugwy2ZlFu…: "ai can really not do teeth its one of its weaknesses so it tends to mess up and …"
- rdc_jhd512b: ">i think many jobs will be automated soon.. And this is why you're not seein…"
Comment
Since we are conscious and we are intelligent and we call these models that we train on human data Artificial intelligence. Why not call it Artificial Consciousness? I don't know what that would entail since I don't really care about the "asteroid" as you seem to imply AI is for us. Since I believe everything we call AI should just be a tool and it shouldn't be our goal to make a AI loose the "Artificial" part from Artificial Consciousness more for moral reasons as well as as everyone likes to point out, ai might want to end humanity if the very worst scenario in every Sci-Fi novel or movie comes true. Vis-à-vis Skynet.
But I'm probably wrong. And so are you most likely. Because neither of us actually has the slightest idea of how to build an AI like GPT let alone something many times (like at least 5 times) more complex like an AGI. And since we don't know how to make one. It doesn't make either of us qualified to talk about it. But take the discovery of the atom and nuclear energy. Some scientist thought that if you exploded a nuke that the explosion would keep going and destroy the whole world. Luckily whoever said that was wrong.
I don't want to know what the future brings. I'm just hoping that ill be accepting of it.
Source: youtube | AI Moral Status | 2023-08-20T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugypjv3bQ2Tz6_WpGpl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxz8U1BSVaQ54S54eB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyiOKIlotGd3U-H54N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw9zvAR2Zt7r1nO_4d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgwadnjdnaiJXIk7-Zx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6NBuAW8DASm5TgeJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwSR5nKfd2aD4v_3uR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzvVkHeFCMHKtXmkZN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxYh7MjLz_uAMCxhit4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzXX08fCS8uf74lRvl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
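The lookup-by-comment-ID step described above can be sketched in Python: parse the raw batch response, index the coded rows by their `id` field, and fetch the row for one comment. This is a minimal sketch under assumptions; the function names are hypothetical, and the sample array below uses two rows copied from the batch response above.

```python
import json

# A raw batch response: one JSON array per batch, where each element codes
# one comment on four dimensions (responsibility, reasoning, policy, emotion).
# These two rows are taken from the response shown above.
raw_response = """
[
  {"id": "ytc_Ugw9zvAR2Zt7r1nO_4d4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UgzXX08fCS8uf74lRvl4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(response_text):
    """Parse a raw batch response and index the coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

def lookup(codes, comment_id):
    """Return the coding for one comment, or None if the ID is not in the batch."""
    return codes.get(comment_id)

codes = index_by_id(raw_response)
row = lookup(codes, "ytc_Ugw9zvAR2Zt7r1nO_4d4AaABAg")
print(row["emotion"])  # -> resignation
```

Indexing once and reusing the dict keeps each subsequent ID lookup O(1), which matters if the tool serves many inspection requests against the same batch.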