BHR GENERAL CHAT & PLOT THREAD!

  • I do agree with that, I was just thinking over a more short-term period. After all, whatever may happen in the next 100 or 200 years, we don't know if we'll even get there. There's plenty of time for society to decline, either through our own actions or through some natural event stunting our growth. That could set us back a few centuries for all we know. Still, that's just a classicist's point of view; with globalisation and technology we might have finally stabilised. I have no idea.


    Further, if we have the ability to make that kind of AI, then wouldn't that mean we completely understand the human brain? Then what stops us from copying it over and creating sentient machines that are essentially us? There's been sci-fi about it, so I don't think it's unlikely people will at least try to move in that direction. It would then be difficult to start a war, since people strive for benefits. But that's a whole new can of worms.


    Really, what I'm trying to say is that there's no way to see what will happen: the reactions of governments, the opinions of people, how much regulation there'll be, how quickly the public will gain access, etc. There are so many factors that it's hard for me to actually have an opinion either way.


    ikr, same.

  • Ahhhh, excuse my high and mighty attitude, but if you guys want to get into the AI debate, you should probably look into actual trends in AI as well as the consensus of the scientists working on it. I've taken three university-level AI classes, and when we debate philosophy, the professors start by completely dismissing all of pop culture's ideas of AI. That's because pop culture mostly doesn't understand AI at all -- there are some sci-fi writers who write AI close to what it actually is, but I'm talking about things like Star Wars, HAL 9000, and Terminator as pop culture, for example.


    Right now, it's not even clear whether computers could ever reach consciousness -- this question is essentially part of the singularity debate. We're not sure if the singularity -- the point when machines surpass human intelligence -- will happen at all (once a machine became sentient, it would arguably surpass human intelligence by definition). Elon Musk, iirc, is one of the main people in the technological-singularity camp, while most of the other big names in AI are dismissive of the singularity happening.


    Most researchers in AI don't think the singularity is a big concern anyway, especially not where we're at. Many of you guys probably see all the AI advancements as insanely cool... but honestly, they're really basic. Right now, a bug is smarter than an AI. Literally.


    AI requires many things to become better than humans at a task -- and that's just one task (toy sketch after this list):

    - gigabytes/terabytes of well-labeled data in a well-organized dataset (how much depends on the complexity of the problem, but it takes a lot of data)

    - lots and lots of the AI teaching itself (how long depends on the algorithm you use, but even the best ones take forever to run)
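
    For a sense of scale, here's a toy Python sketch of that loop (all the numbers are made up for illustration; real systems use vastly more data and compute):

        import numpy as np

        rng = np.random.default_rng(0)

        # 1) "Well-labeled data": real systems need gigabytes of this;
        #    here we fake 1,000 labeled points in 2 dimensions.
        X = rng.normal(size=(1000, 2))
        y = (X[:, 0] + X[:, 1] > 0).astype(float)  # label = which side of a line

        # 2) The AI "teaching itself": plain gradient descent on a logistic
        #    loss, looping over the same data again and again.
        w = np.zeros(2)
        for epoch in range(500):  # real models train far longer on far more data
            p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
            w -= 0.1 * X.T @ (p - y) / len(y)  # nudge weights toward the labels

        print("learned weights:", w)  # roughly equal weights, matching the true rule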


    What is more likely to happen is that humans will use AI and other advancements in computer science and computer engineering to augment themselves. Then the augmented humans will probably become completely different from the way humans are now... but we're already very different from humans one thousand years ago.

  • The problem with looking at things as they currently are is that that's never been how humans work. You never look at what we have and go "well, we're so far from it right now that it's not going to happen". That kind of prediction has been made countless times throughout our history and is almost always wrong. Sometimes culture gets ahead of itself and people predict things far too early, flying cars for example, but the truth of the matter is that technological growth is exponential. Not always as impressive as predicted, but literally exponential. It's like the saying 'the rich get richer': the more we know, the faster we advance, rinse and repeat. Phones today are more powerful than the computers we went to the moon on around fifty years ago (quick sketch of what that kind of growth looks like below).

    AI is absolutely awful today, but it's not about today, or even ten years from now. If you're talking short term then I agree human augmentation will happen first, almost certainly, since humans are already functioning beings that would just be altered, but the reality of artificial intelligence surpassing human intelligence is more a question of when than if.
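
    To put a number on "exponential", here's a back-of-the-envelope Python sketch. It assumes a Moore's-law-style doubling every two years, which is a rough historical trend, not a guarantee:

        def growth(years, doubling_period=2):
            """Relative capability after `years`, doubling every `doubling_period` years."""
            return 2 ** (years / doubling_period)

        # Fifty years of doubling every two years is 2**25:
        print(f"50 years  -> ~{growth(50):,.0f}x")   # ~33,554,432x
        print(f"100 years -> ~{growth(100):.2e}x")   # ~1.13e+15x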


    I close with the fact that most researchers thought quantum mechanics was bullshit during its inception. I tend not to agree with "well, scientists aren't worried about it right now", because science is about breakthroughs rather than agreed-upon consensus. Simply because they aren't worried now isn't an argument against anything we're talking about occurring within the next hundred years.

  • I am so here for this AI conversation


    While we’re mentioning AI in pop culture, has anyone seen Ex Machina? My AI lecturer recommended watching it as an interesting take on sentient AI.


    As wildly inaccurate as AI movies can be, I highkey love them

  • I'm with you there! Culture portrays AI terribly sometimes, but man, it would be awesome if it were true. Well, maybe not awesome. Cool for sure, but I'd debate whether the extinction of humans is 'awesome'. xD

  • I mean, it'd probably do some good for the planet, so that part would be pretty great, considering humans are the main cause of the extinction of so many different species of animals and types of plants

  • True, we really have messed the planet up, so in a way it would technically be doing some good. I mean, we're sentient, so our extinction would be way worse, but at least it wouldn't be a zero-sum-game kind of situation. We'd at least get something out of it xD


  • I feel like I have to mention that I have looked into AI, though obviously not at a university level, so it's still fun to talk about what-ifs. That's what I meant by resources: currently AI is limited by hardware and by its algorithms. Sure, it can teach itself, but that's still trial and error. As wolfie said though, who knows in the long term? People once didn't even know about atoms; we (scientists included) might know less than we expect.


    The point about people 1000 years ago made me think, though; now I really want to write an essay on the difference between people in antiquity and us.


    I haven't seen Ex Machina yet. Actually, I haven't watched many sci-fi movies in general; I might go and find some next week.


    edit;; apologies for rushed typos

  • The thing about progress being exponential is that that's just a model -- it isn't even entirely accurate, and we have no proof that there isn't a tapering-off point, either. The Egyptians had the pyramids, but we lost that knowledge for thousands of years. It could quite simply be that tomorrow someone makes a breakthrough that proves why AI will never become sentient, or someone finds an efficient way to factor large numbers into primes (factoring is believed to be hard, though it's never been proven, and it's what keeps a lot of internet cryptography secure, btw).
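
    (For anyone curious why factoring matters there, a toy Python sketch: trial division finds small factors instantly, but its cost grows with roughly the square root of the number, which is why RSA-sized numbers are far out of its reach.)

        def factor(n):
            """Smallest prime factor of n by trial division (~sqrt(n) steps)."""
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return d, n // d
                d += 1
            return n, 1  # n itself is prime

        print(factor(15))                # (3, 5) -- instant
        print(factor(999983 * 1000003))  # two ~6-digit primes: ~a million steps

        # A real RSA modulus is ~2048 bits (~617 digits); trial division would
        # need roughly 2**1024 steps. Better algorithms exist, but no known
        # classical one is efficient enough to break those sizes (so far).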


    Scientific consensus is absolutely important. Despite a sliver of examples in the history of scientific progress (which typically comes about with a paradigm shift, but that's Kuhn, and we're not talking philosophy of science right now) where some singular person came up with a new method that changed the game... the consensus of scientists has been fairly accurate. There is no absolute proof of global warming, but the scientists who study it pretty much agree that it's real. There is no proof that P doesn't equal NP (and if P = NP, a lot of cryptography and computer science breaks), but almost all computer scientists believe that conjecture. Before Newton, astronomers as a community knew there was some 'force' that kept the planets circling the sun -- Newton just proved it.


    In the manner of scientific progress, scientific consensus usually comes before absolute proof. Sometimes it's wrong, like with blood-letting in medical history, but it's usually quite accurate. Sure, quantum mechanics had a rough start, but once there was enough evidence, scientists came to the consensus that it was right -- and now it's bedrock physics. Even in that example, the computer science community considered quantum computers possible before they were actually built.


    Science is about breakthroughs. But almost all breakthroughs in modern science have happened after the scientific consensus already considered them possible.

  • I appreciate the setup for why scientific consensus matters, but you missed the point of what I meant when I said "science is about breakthroughs rather than agreed upon consensus": none of the above was a counterargument, just an explanation of why consensus is important to the grand scheme of science. That's understandable, though, since my point was very rushed compared to what I wrote earlier (I had to go make my sister food). I didn't exactly word my argument eloquently there, my apologies.


    What I was trying to imply is that scientific consensus about current technology isn't as important when talking about far-future developments like artificial intelligence, where a breakthrough in understanding will matter far more than ten years of the scientific community agreeing on previously discovered information. It very much depends on how you look at it. Sometimes leaps in knowledge come after the scientific community has agreed upon something enough to encourage further analysis and confirmation testing. Other times, and this is where most of the large leaps come from, someone or a small subsection of the scientific community makes a breakthrough that disagrees with previously stated information.

    Of course AI is currently a non-issue because it's still in its infancy, or at least its modern infancy, since we've only recently had the potential to actually do something with the idea. Given past trends in science, especially in computer science, it's very likely that the current consensus won't remain the agreed-upon understanding. When talking about something potentially very different from our understanding of it now, you have to make educated predictions about what things might look like going forward.


    It's the much safer assumption to simply agree with the majority, since it removes the possibility of looking bad among friends/peers, but that's no fun, especially when it comes to debating something as wildly unpredictable as the future of artificial intelligence. Someone has to represent the more unlikely, but still possible, side of the argument, or else there's nothing to debate.

  • Bleh, I'm gonna stop debating with you, since it's pretty clear we have different definitions for a lot of the words we're using. I don't wanna waste my time clarifying the hidden assumptions behind each of our arguments lol

  • Understandable all things considered. You've taken university level classes on it so you're probably right anyway by way of having more knowledge on the subject. xD

  • Might be slow posting again; work has me doing a lot more, which means I don't have all the time in the world. I'm still replying to our planned threads and whatnot, though. It makes me feel terrible to admit that I have to slow things down, but it has to be done.

  • that's understandable, life happens. I'll keep in mind to reply to the thread between these two tomorrow, but take your time.


    also on that note, I'll try and get some open threads up tomorrow! how is everyone?

  • Pretty good! I just finished playing some Divinity with my friend, and man that game is really fun even if pretty difficult. How are you doing~?


    Speaking of threads though I need to reply to the thread with Crown and Astral later today. I got super flooded the other day so I forgot to do it x_x

  • No worries we all get swamped sometimes XD I had like a whole lot of muse for Crown earlier but then I logged on and my brain was like sike


    I’ll try to reply to things/make open thread anyways though!