Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The problem with AI art is that it is simple enough in concept for any programme…
ytc_UgySN7Nm-…
jobs i think AI will not replace:
- baby sitters (no sane person will allow mach…
ytc_UgwgvFe7Y…
Well let me tell you that Hollywood movies are always based on true events. Reme…
ytc_UgzcldePe…
Ai wouldn't have a conciousness because of its machinery and it is not a living …
ytc_UgwJ7t5So…
I mean the current "ai" finally made us understand that many smart,inteligent th…
ytc_UgwR6mXJ_…
GREAT way to destroy the mankind.
AI = Less jobs overwhere, less people thinkin…
ytc_Ugweu4_Yk…
There's no robot in the video, but at the end of the day, AI wouldn't need to be…
ytc_UgxN1Y98h…
AI can copy data sets perfectly and while integrating them, it does a smooth job…
ytc_Ugx10g3U0…
Comment
Even Elon is not smart enough to know the consequences of AI. Let's take an experiment of 5 people. You provide their characteristics to the model and ask the model according to their personality how can I make them believe in something, how can I change their perspective. As you know that social media is collecting information especially the emotional information about a person. it could very much possible that AI should suggest some ways according to their current mood and past life experiences how to change their view. Now imagine the same thing doing with the lots of people, imagine using this for social media. LLMs are not mature enough (i could be wrong) to achieve this yet but soon in the future it will.
youtube
AI Governance
2024-03-21T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz2krT4l_erfc8pZMt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzCCqjUs7yh1FYxgGd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyl2D7viy_okB5ZN_h4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzrvLvarP3dW2ZWVXt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugzvj_T9keAzMy9nfgN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzh5Z9ZrKrK9cNvDC14AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxeK84F2H9c4p5v2MR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxQUGOR80oq2BnCPyB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw_14of_S6cVWFVSj54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxW9XrF_UgbYCul8WZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
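The raw response is a JSON array with one object per comment, so looking up a coding by comment ID reduces to parsing the array and indexing on the `id` field. A minimal sketch, assuming the four dimension names shown in the coding-result table above; the function name `index_by_id` is hypothetical, and the two entries are copied from the raw response:

```python
import json

# Dimension names taken from the coding-result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_response: str) -> dict:
    """Parse a raw batch response and index each coding by comment ID.

    Missing dimensions default to "unclear" (an assumption, not a
    documented behaviour of the coding pipeline).
    """
    rows = json.loads(raw_response)
    return {
        row["id"]: {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
        for row in rows
    }

# Two entries copied verbatim from the raw LLM response above.
raw = """[
  {"id":"ytc_Ugw_14of_S6cVWFVSj54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxW9XrF_UgbYCul8WZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]"""

coded = index_by_id(raw)
print(coded["ytc_Ugw_14of_S6cVWFVSj54AaABAg"]["emotion"])  # fear
```

Indexing once up front makes repeated ID lookups O(1), which matters when cross-checking many coded comments against the same batch response.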