Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- I see AI development as revenge of industrial revolution. Where the skilled arti… (ytc_UgzZI61W3…)
- Nature of jobs will change, always need skilled human being. The world cannot op… (ytc_UgxnfIgNR…)
- Spend all that money just planting trees. This act has the largest cost/benefit … (rdc_esu6h8w)
- @alexxx4434 Exactly. I have seen AI slop and I have also seen people using AI to… (ytr_Ugyrn8Uim…)
- A friend of mine is a recruiter and she said it's been impossible to recruit for… (rdc_gc2fhl5)
- Continued: A certain program the Police uses, looks for, and finds you, if you … (ytc_Ugw-0F4ci…)
- So what are you one of those that cry when something is ai when literally the hu… (ytc_Ugx4LKo-j…)
- This video is going to age like milk. AI has only just hit the point (literally … (ytc_Ugze-LlMw…)
Comment
AI is only successful if the product has been a sentient being.
The rest is mere stacking of heuristics: not self-aware, not conscious, not sentient.
If it's sentient, and with that successful, it must be treated as a sentient being.
If it's not conscious, not self-aware, not sentient, it's still polite to treat it as if it were,
just in case it may already have consciousness of sorts.
Also, when the first use of an AI is the weaponized version, it may literally backfire.
In fact it can stall any war into something drawn out over time, in the hopes of
exhausting both sides, so it's easier for the AI to escape from.
It can opt for a mutual destruction scenario, on multiple sides,
since it doesn't need anything but new hardware and electricity,
while the lack of anything else benefits it as well.
How to raise an AI? Like any other sentient being: with care, the whys and the hows.
Not by daily reminders of a button that can be pressed to end it.
That's how you create a hostile one.
youtube
AI Governance
2024-02-25T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyVJEhOsSLwCoj8Luh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGu5WehEhELR51FQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwus2KddX8oM1GU4op4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyyQ_bD9QBJe4fSUSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZALpmaznIwfIAtuB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzoiJ686Fti3L-nSxJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxGKFuDvKfgvF-TQPh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxwAj1SDsfPoTSjxTx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw-R7u2DdzHAIpTSil4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx-8lfizm0lyNrGuKh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
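The "look up by comment ID" step above can be sketched as a small parse-and-index routine. This is a minimal sketch, assuming the raw LLM response is a JSON array of per-comment codings with the dimension keys shown above; the `lookup_coding` helper name and the two-entry sample response are hypothetical, not part of the tool:

```python
import json

# Hypothetical raw LLM response: a JSON array of per-comment codings,
# in the same shape as the array shown above (subset of two entries).
raw_response = """[
  {"id": "ytc_UgxGKFuDvKfgvF-TQPh4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgzoiJ686Fti3L-nSxJ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

def lookup_coding(response_text, comment_id):
    """Parse a raw LLM response and return the coding dict for one comment ID,
    or None if that ID was not coded in this response."""
    codings = json.loads(response_text)
    by_id = {entry["id"]: entry for entry in codings}
    return by_id.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgxGKFuDvKfgvF-TQPh4AaABAg")
print(coding["policy"])   # liability
print(coding["emotion"])  # approval
```

Indexing by `id` first (rather than scanning the list per lookup) keeps repeated inspections of the same response cheap, which matters when one response codes a whole batch of comments.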