On Tuesday, April 12, 2022 at 11:36:17 PM UTC-5, pataphor wrote:
TL;DW: i'm
alignment is immoral for the infinitesimally small greater-than-human-intelligence range for which it works (GTHI), irrelevant for artificial intelligences even greater than that (GTHAI); therefore simulation theory is impossible.

On Friday, April 15, 2022 at 9:57:19 AM UTC-5, Don Stockbauer wrote:
Wow, I didn't know that!

On Friday, May 20, 2022 at 5:17:21 PM UTC-5, Don Stockbauer wrote:
So go think about it some, and when you've come to the realization that you can't steal someone's identity by only knowing their age and gender and such, then go ahead and send me another letter. Kind of a shame; I did like writing to you. Once I am noticed, there is a potential infinity of a lot of different things, like emails I can be sent, or letters that can be handwritten and sent. "Potential" means, well, take handwritten letters: you can get letters from someone, not an infinite number

On Friday, May 20, 2022 at 10:25:36 PM UTC-5, Don Stockbauer wrote:
of them, but you can get a lot until something stops it. It's only a potential: the person dies, or the person's hands are cut off, or the person runs out of paper. But I'm thinking now there's all kinds of things that have this potential infinity.

On Friday, May 20, 2022 at 11:41:48 PM UTC-5, Don Stockbauer wrote:
Well, it was all over at the point that you came back to this user group and started mean-mouthing me. I've wasted all this additional time. Oh well, I have nothing better to do, so why not?

Do you think Putin will kick off World War III?
The most impossible task on the face of the Earth is to get someone to communicate with you who doesn't want to.
Sysop: Keyop
Location: Huddersfield, West Yorkshire, UK
Users: 379
Nodes: 16 (2 / 14)
Uptime: 66:44:16
Calls: 8,084
Calls today: 2
Files: 13,068
Messages: 5,849,424