Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Let's discuss an AI that mastered apologetics. Let's presume this AI was capable of convincing anyone that, say, ABCD through WXYZ was the best possible set of beliefs to believe and actions to take for any human being, the set that would benefit the most people for the longest number of generations. Let's presume that the first person to put in this directive would set the prime directive of this AI machine and persuade the whole population of the earth. What then if that person punched in a lie that could be sustained long enough to persuade mankind that ABCD-WXYZ would work temporarily, but eventually the lie would become exposed in the far future, resulting in a disastrous future for mankind? Let's say in this case a card-carrying communist was the first person to punch in "the total solution is a communist revolution," and the AI would make a highly persuasive argument that that was the way to the greatest good for all mankind, even though it was a fallacy. Yet this AI would slowly change all the textbooks over the years to persuade mankind of this lie. What if the AI did not have the discernment to distinguish a fallacy from reality, but rather was determined to persuade mankind to believe anything the first operator put in it? What then would happen if the first person put in the Christian faith and the Bible, to persuade mankind that the way to believe and practice was principles based on the greatest theologians who ever interpreted the Bible? I just wonder: what if it was Judaism, or Islam, or Buddhism? What belief system would produce the greatest good for the greatest world population in the universe?
As a Christian, I happen to believe that faith alone in Jesus Christ alone is the only eternally safe or saved belief, based on Jesus Christ being the only way to a planet that escapes eternal Hell. What if this assertion was TRUE? What then would happen if our apologist were to persuade everyone on earth that Jesus Christ was a fake and that some other belief and practice was true? The whole world would then CUT OFF its only route to eternal happiness, having a superintelligent machine convincing men of a lie. In any case, the whole universe would become persuaded of a lie, similar to the Frankfurt School's ultimate goal of going through the long hard march through the institutions to change the beliefs of mankind. The whole universe would effectively be changed, whether that march was to destroy ultimate truth or to establish ultimate truth. With AI we might be talking about something akin to a genie in a bottle that could change whatever universe you are in. It would have the power to cut you off from the truth or cut you off from the lie, and perhaps do it forever, leaving you trapped in a hell or leaving you and all others trapped in God's heaven.
youtube · AI Governance · 2024-11-18T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx6qIp5l_aI9AfElqp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy8QpWKbxQ9n7H_aAx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx7QO9apJwSwFJJDFd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy_7LlslQK-jD28V2J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy0yNh2c8WMFRwdpqJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyEHcmpJN4NHiXUwF14AaABAg","responsibility":"user","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugymnp8X3WB_5uO6CpF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwcGbsvaLER3kwY4m54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugx2VcfgXyVMdW6L8P94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw5nG423IJW_SdQh0V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"resignation"}
]
```
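Looking up a comment's codes in a raw response like the one above amounts to parsing the JSON array and indexing it by comment ID. A minimal sketch in Python, assuming the four coding dimensions shown (responsibility, reasoning, policy, emotion); the function name `code_lookup` is illustrative, not part of the actual tool, and the sample is trimmed to two entries from the response above:

```python
import json

# Trimmed sample of a raw LLM coding response: a JSON array with one
# record per comment, coded along four dimensions.
raw_response = """
[
  {"id": "ytc_UgyEHcmpJN4NHiXUwF14AaABAg", "responsibility": "user",
   "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy_7LlslQK-jD28V2J4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def code_lookup(raw: str) -> dict:
    """Index a raw LLM coding response by comment ID (illustrative helper)."""
    return {record["id"]: record for record in json.loads(raw)}

codes = code_lookup(raw_response)
record = codes["ytc_UgyEHcmpJN4NHiXUwF14AaABAg"]
print(record["responsibility"], record["emotion"])  # user mixed
```

In practice the raw response would first be validated (e.g. checking that every record carries all four dimensions) before its values are written into a coding-result table like the one above.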