Patents

What’s a patent? A patent is a document that gives an inventor exclusive rights to make, use, and sell the product that they invented. The main argument in favor of patents is that inventors deserve to own and be able to profit from what they invent. Granting patents gives inventors an incentive to invent. The argument is that without exclusive rights to produce their invention, they won’t invent things, because someone could just steal the idea, produce it at a lower cost, and push them out of the market. Patents ensure that inventors are able to profit from the work they put into developing things.

Reading about the history of patents in software was really interesting. It’s not surprising that early on software couldn’t be patented. People didn’t really understand it. That’s honestly a major problem with laws about much of the tech industry. Courts and legislators have to make decisions about how things operate legally even when they have absolutely no understanding of the technology involved. Originally I was really pro-patent, because the arguments for them are straightforward, easy to understand, and make sense, but listening to the podcast arguing against them really persuaded me. I think really highly of Elon Musk. I think he’s arguably the most innovative thinker alive right now (or at least the most innovative person with a billion dollars to spend on stuff). So when he says that people shouldn’t use patents, I’m inclined to agree.

I don’t really think that software should be treated that differently than other things. At this point, software is a major field, and programs are products just as much as tractors or any other physical thing. Not granting patents for software reflects, I think, a lack of understanding of the market and the development process.

That being said, I have sort of been convinced that we shouldn’t use patents at all. The American people and American government are very opposed to monopolies. Early this year we talked about monopolies and the huge legal battles they can cause. Patents create monopolies. If the invention is significant enough, there could potentially be a whole market that one company owns and nobody else can touch because of patents. That seems bad to me. I also really buy the logic that getting rid of patents wouldn’t deter innovation (in most fields). In the software industry especially, the norm is to use other people’s code. It’s to collaborate and “steal” other people’s ideas and make them better. And I think that most of the time people don’t really get mad when someone else makes their software better. I think it encourages competition and promotes innovation.

There are industries, however, like in medicine, where patents still seem very valuable. I would hate to see funding taken away from drug research out of fear of not making money on it. The medical field is just a constant conflict of interest, because on one hand it’s about making people healthier and saving lives, but on the other hand, the companies need to make money for there to be the incentives to make it all happen. I liked the idea of shortening the length of patents, because I don’t think that would deter anyone from inventing or researching or innovating, but it would mitigate some of the monopolistic tendencies of the current patent structure.

When it comes to patent trolls, I respect the creativity it takes to find a field where you can make money without producing a product or offering a service, but honestly they’re just assholes, and I do think they prove that something is wrong with the current way patents are granted.

Self-Driving Cars

When it comes to self-driving cars, the debate revolves around safety, safety, and safety. Automakers claim that self-driving cars are going to make the roads safer. Other people disagree, saying that these cars will put them in danger. And the debate kind of goes in circles like that. In reality, I think that the allure of self-driving cars is the ability to get somewhere without any effort. Having your car drive you to the store while you watch Netflix is honestly really desirable, and the tech companies know this. I think that deep down that’s what they really think will make these cars sell. The problem with advertising that is it doesn’t look very good. When the argument against your product is “It’s going to put millions of lives in danger!” you can’t really respond with “But you could take a nap or binge-watch The Office on your way to Walmart!” The media would tear that to shreds. So they have to argue that the cars are going to be safer than human drivers, and I don’t necessarily think that’s completely wrong.

Socially, things definitely get sticky. It’s hard to say what to do in situations where a crash is inevitable. My instinct is to say that the car should make whatever decision it thinks will cause the car the least amount of damage, because in 99 out of 100 situations, that injures the fewest people, saves the most money, and everybody wins as much as possible. The problem is that 1 situation in which this isn’t the case, and somebody dies because the car wanted to avoid a smashed bumper. When you bring up situations like that, it’s hard, and I really don’t know how to deal with it. Running over 10 people vs. purposely crashing into a wall is a tough decision. But it’s also a decision that people almost never have to deal with. I think having a car that behaves safely in all the normal cases is pretty good, because I don’t think humans would know what to do in these situations either. When it comes to who’s to blame, that’s also sticky. Obviously programmers have to be somewhat responsible for the behavior of their code, and then there are the automakers that build the hardware, but there’s also the user that decides when and where to drive the car. So I don’t think one person can be to blame. It also depends on the situation of the crash. If a crash occurs and it’s an obvious malfunction of the code, then yeah, I think the programmers should be held liable, or at least the company that produced the code. But if a self-driving car gets put in a difficult situation and crashes for a reason that’s not fully the fault of the code, I don’t think the programmer should be held liable.
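That “least damage” instinct can be written down as a toy cost-minimization, purely as a thought experiment. Everything here is made up for illustration (the maneuvers, the outcome numbers, and the weights are not from any real vehicle software), but it shows where the ethical question actually lives: in the weights.

```python
# Toy model: pick the maneuver with the lowest estimated total cost.
# All values below are invented for illustration; real systems are far more complex.
maneuvers = {
    "brake_straight": {"injuries": 0, "property_damage": 2},
    "swerve_left":    {"injuries": 1, "property_damage": 1},
    "swerve_right":   {"injuries": 3, "property_damage": 0},
}

# Weighting injuries far above property damage is itself a value judgment --
# exactly the kind of ethical choice the paragraph above is worried about.
INJURY_WEIGHT = 1000
DAMAGE_WEIGHT = 1

def total_cost(outcome):
    return (INJURY_WEIGHT * outcome["injuries"]
            + DAMAGE_WEIGHT * outcome["property_damage"])

best = min(maneuvers, key=lambda m: total_cost(maneuvers[m]))
print(best)  # brake_straight: zero injuries beats any property-damage savings
```

The code is trivial; the hard part is that somebody has to choose `INJURY_WEIGHT`, and no number makes the 1-in-100 tragedy case go away.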

I think that self-driving cars are going to become very normal in the next 10 years. It’ll start small with cars that park themselves (pretty sure they already have those). Then it will be cars that can drive in the country. Then cars that can drive in the city. And honestly, I really think that at some point in the future, cars driving themselves is going to be more normal than humans operating them. It’s gonna get messy, because crashes are going to happen, whether it’s 1 in a million or 1 in a trillion. And when these crashes happen, people are gonna get sued, and the government is going to have to make more regulations, and people are going to be mad. But that’s technology. I think the government, like in any field, should make sure the technology is tested before it’s released to the public. I also think they should draft laws with sanctions for self-driving cars that malfunction or crash. The government’s role will change over time as the technology and the number of these cars on the roads change.

As for me, yeah, I’d love a self-driving car. It’s like taking the bus, but I don’t have to share it with a bunch of creepy guys that smell funny. Like I said earlier, watching Netflix or taking a nap on the way to the store would be awesome. But I wouldn’t want to be on the cutting edge of it. Hit me up a few years down the road when they’ve been tried and tested. Then I’m in.

AI

In its simplest form, AI is a computer program behaving intelligently. To expand on this, the Turing Test proposes that for a computer or program to be considered intelligent, it has to be able to convincingly impersonate a human. There’s definitely a distinction between artificial intelligence and human intelligence. One thing that I think is different is that AI is deterministic. Given the same program and the same input (and no source of randomness), we can know with 100% accuracy exactly what the program will do, and (depending on your philosophical views about the universe) people are not as easily understood. There are other ways that I think humans are different from AI, but they’re kinda just intuition, and I don’t really know how to describe them. I guess it’s the whole feelings thing. We don’t like to think that robots can have feelings, which makes us different.
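The determinism point can be made concrete with a tiny sketch (illustrative only, not from any real AI system): run the same function on the same input as many times as you like and you always get the same answer, and even a program’s “randomness” is reproducible once its seed is fixed.

```python
import random

# A deterministic "decision" function: same input always yields the same output.
def classify(score):
    return "high" if score >= 0.5 else "low"

# Running it repeatedly on identical input can never produce different results.
results = {classify(0.7) for _ in range(1000)}
assert results == {"high"}

# Even pseudo-random behavior replays exactly from the same seed.
random.seed(42)
first = [random.random() for _ in range(3)]
random.seed(42)
second = [random.random() for _ in range(3)]
assert first == second  # identical sequences from the same seed
```

Humans, as far as anyone can tell, don’t come with a seed you can reset.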

I definitely think that some of these new programs, especially AlphaGo, show that AI is a real thing. Go is considered a pretty intuitive game, in that positions are difficult for a computer program to evaluate. There’s no simple way of giving a board a specific score. The game requires thinking in a way that previously seemed impossible for computers. It’s pretty crazy that a machine is able to make decisions in this way. I still don’t think AlphaGo thinks the same way people do, but that doesn’t mean it’s not intelligent. I don’t think that AlphaGo or any other computer could be considered the same as or comparable to a human mind, but the line is definitely getting blurrier.
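The “no simple score” point is easier to see next to a game where a simple score does exist. A chess-style material count (a textbook simplification, not anything AlphaGo uses) fits in a few lines; Go has no comparable one-liner, which is why AlphaGo had to learn its evaluation from data instead:

```python
# A classic chess-style evaluation: sum up piece values.
# Uppercase = White, lowercase = Black; positive scores favor White.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9,
                "p": -1, "n": -3, "b": -3, "r": -5, "q": -9}

def material_score(board):
    """board is a string of piece letters; unknown characters score zero."""
    return sum(PIECE_VALUES.get(piece, 0) for piece in board)

# White has an extra rook: the heuristic immediately says White is ahead.
print(material_score("RNBQPPPP" + "nbqpppp"))  # 5
```

In Go every stone is identical, so counting material tells you almost nothing; the value of a position lives in shape and influence, which is exactly what resisted hand-written heuristics for so long.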

In terms of robots overthrowing humans and taking over the world, I think it’s kind of dumb to worry about that now. I don’t wanna say it’s not possible, because who knows what could happen down the road. But the possibility of that happening just seems so distant and so far-fetched. We just don’t have computers making decisions nearly complex enough to create situations in which all of humanity is in danger. Yeah, we have a machine that can win at Go, but that doesn’t mean it would kill someone in order to win the game. There’s also the problem of hardware restrictions. It doesn’t matter how intelligent computers get if they don’t have the hardware to actually do damage. I don’t know. The concept honestly just makes my head hurt.

For now, I think the Turing Test is a good test for “intelligent computers”. The Chinese Room isn’t really an argument against the Turing Test, in my opinion. Really it’s an argument against computers being intelligent at all. I feel pretty confident in saying computers can be intelligent in some way, so I think that for now the Turing Test gets the job done.

Long story short, computers are getting really smart, but they’re not people and they’re not going to take over the world.


Project 3: Wikileaks reflection

We did a podcast about WikiLeaks, and in that podcast we talked about Vault 7, WikiLeaks’ newest disclosure, which contains thousands of CIA documents, many of which outline hacking/spying mechanisms that the agency has. Honestly, Vault 7 doesn’t really faze me. I feel like you’re being naive if you don’t think the CIA is trying to spy on us. After Snowden, I just don’t think people can really be that surprised by the government spying on us. To be honest, I don’t really care. A lot of people do, but I just don’t feel like I am doing anything private enough that having the government look at it bothers me. Maybe that’s dumb, but even the people who hate government surveillance and think it’s gone too far shouldn’t be that concerned with this. Maybe I didn’t read far enough into what the CIA is actually doing, but it doesn’t seem any worse than the stuff we already know about the NSA. So no, Vault 7 does not influence my views on government surveillance.

I think it’s pretty easy to separate the message from the messenger in the case of WikiLeaks most of the time. In terms of its role in the news that it’s sharing, I see WikiLeaks the same way I see most news sources like newspapers or magazines. I definitely think that there is some bias in the things that they leak, and I think that you shouldn’t take everything they say as definite truth. I really do think, however, that most of the stuff WikiLeaks publishes is real material that was given to them that they’re just sharing with the public. So I think there are ways that WikiLeaks paints a picture with the information it has, but I think the information is totally separate from WikiLeaks itself. I don’t think they are fully responsible for the things that they publish, and I think that they have the right to publish whatever gets sent to them.

I definitely think there are times to be silent. I don’t think that whistle-blowing is always good. But I also don’t see WikiLeaks as the whistle-blowers. I see them as just passing the message on to the next person. I think it is unfair to put the burden of deciding what is too dangerous or unethical on WikiLeaks. That would cause them to insert themselves more and more into the content. That’s dangerous, because then when something bad gets published there’s nobody to blame and nobody is held responsible. But I also don’t think that’s WikiLeaks’ problem either. I hope that dangerous content doesn’t get sent to WikiLeaks, because if it does, I think they’ll publish it, and when they do, I think it will be hard to blame them for any negative repercussions.

Real (fake) News

Fake News, Fake News, Fake News seems to be one of the only things anyone, including the “real” news, wants to talk about anymore. Fake News is the practice of writing and circulating articles with false claims in them around the web. It’s become such a hot topic because people are saying that it played a significant role in the 2016 presidential election. When it comes to Fake News, I definitely think that circulating blatantly untrue stories around the internet is a bad thing and should be mitigated as much as possible. However, I don’t know how easy it would be for Facebook or other companies to prevent it. It’s a more complicated matter than just “It’s bad. Make it stop.” In my opinion, one of the reasons it’s such a hard question is that satirical outlets like The Onion have been releasing content with untrue statements for a very long time, and I think most would agree that The Onion isn’t doing anything malicious.

When scrolling through my Facebook timeline, I don’t really see much of what I perceive to be fake news. Most of the content I see that isn’t direct posts from my friends is largely just videos about random unimportant things. When it comes to regulating the news that we see on Facebook, I don’t think I have a problem with Facebook picking and choosing what I get to see. I mean, it is their website. I can pick anywhere I want to get my news from, so if I choose Facebook, I should have to deal with what Facebook gives me. Nobody watches Fox News and then says “Hey, Fox News showed me news that I don’t like!” But just because I don’t mind them regulating the news that gets spread on their site doesn’t mean it’s easy for them to pick out “Fake News” articles and remove them. Like I said before, I think a lot of it comes down to outlets like The Onion. (Well, actually I don’t know if you’d call The Onion news, but you know what I’m saying.) The Onion would lose a huge outlet if Facebook created some rule that all news posted on their site had to be true. And it’s also just a slippery slope for them, because while some of these “Fake News” articles are blatantly untrue, in other cases the facts are disputed. Given the biased nature of modern news, filtering out “Fake News” could get Facebook in a lot of trouble, because it might turn into Facebook filtering out anything their curators don’t like and calling it fake. And like one of the articles said, it seems like they already filter the trending stuff, but they don’t (at least I think they don’t) completely remove articles that they don’t like. When it comes to news, I think it’s largely up to the audience to be smart about what they’re reading and to know that almost everything is biased.

For me, I honestly read and watch very little news. I would say that word of mouth is probably my largest source for news, so I probably pay attention to this stuff way less than a lot of people. Honestly if an article or post starts to look political, it often causes me to move on and stop reading it.

I don’t think we live in a post-truth era. I think the vast number of news sources out there makes it pretty easy to check the validity of these articles. If you read something that seems fishy and can’t find any other news outlets that have said similar things, it’s probably untrue. When the Times, the Post, and the Tribune all release articles saying the same thing, it’s pretty easy to believe them and identify whatever they’re saying as probably true.

Corporate Personhood

What does the term “Corporate Personhood” mean? It’s the concept of a corporation being separate from the people that own it and work for it. It gives the corporation its own identity, separate from any actual person. This concept is really important in legal contexts, because it affects what people and corporations can and can’t do. Proponents of corporate personhood think that corporations should have many of the same rights that people do, like freedom of speech. It also means they are held responsible in the same way. For example, they can get sued. Many opponents, however, claim that corporations are not people, and therefore shouldn’t have these rights. The legal implications are that individuals don’t shoulder the whole monetary burden of the corporation’s success or failure. Investors won’t lose all their money if the company goes belly up. They also don’t have to take responsibility if the company gets sued. Money gets taken from the company, not from individual members of it. Socially, it means that in theory companies should be separate from the beliefs and ideals of their members. Kent Greenfield argues that the owners of a company shouldn’t be able to project their religious beliefs onto a corporation if it is its own person. Ethically, it kind of blurs things. I say this because when the corporation does something, behind the scenes an actual person or group of people really made it happen, but now this entity is held responsible. So legally it’s pretty easy: if you mess up, the company has to pay. But morally, who’s responsible? Some would say it’s the corporation, but is the corporation a thing that’s really able to be responsible?

When it comes to the Microsoft antitrust lawsuit, I honestly don’t think Microsoft really did anything that immoral. I think they were trying to get a competitive advantage, and I think they had a product that at the time was better than anything anyone else had. I definitely understand from the government regulation side of things why it was important to put an end to that and promote freer competition, but I don’t think immoral or unethical are words I would use to describe Microsoft’s practices. Maybe I’m naive and they really were bullying the competition out of the market, but packaging two products together and selling them as a joint thing doesn’t seem all that malicious to me.

I think that if corporations are given rights as people they should have responsibility like people. However, I think it’s a really fuzzy area. Corporations are an entity, and they can be treated like a person in certain ways, but a corporation can’t function autonomously. It needs people making decisions and doing things. When a corporation takes action, it’s because a person set that action into motion. So when you give a corporation rights, you’re really giving the people in charge the right to use the mechanism of a corporation to do whatever that right allows. In the same way, if you’re holding a corporation morally responsible, you’re really holding that corporation’s decision makers responsible. So it’s this weird combination of an entity that you can fine or sue or control but that doesn’t really control itself. It’s sort of just a layer of abstraction, and I don’t really know how that works from a moral standpoint. In the case of Microsoft, the corporation itself didn’t decide to package their software. Executives and developers and other employees involved made those decisions, so you can try to hold the phantom being that is Microsoft accountable, but Microsoft didn’t do anything. The people within Microsoft did stuff. But it’s hard to hold them accountable, because they used the corporate mechanism. So ethics really gets blurred here.

IoT: Great New World or Great New Danger?

The Internet of Things (IoT) is the concept of a global connected network of devices, not necessarily just computers. It’s basically baking internet connected computers into all of the other technology that we use: cars, refrigerators, printers, AC units, and a myriad of other things. Like much of technology, the possibilities of the IoT are super exciting, but with the excitement comes some fear and reservations.

The benefits of the IoT are really cool. The ability to control just about everything in your house from your phone is obviously very useful. Forgot to lock the doors? Use your phone. Want to preheat the oven? Use your phone. It’s going to show up in all different industries too. Keeping track of your grandmother’s vitals and sending them to the doctor in real time or keeping data on medication usage as well are just some of many ways in which the medical sector could be revolutionized by the IoT.

So things seem pretty good, right? Well, an entire network of newly connected devices has security implications on a huge scale. Cars being hacked and driven off the side of the interstate or webcams inside someone’s house being used to spy on them are very bad things. They’re also things that haven’t been addressed in the past. Up until now, car manufacturers needed to make sure people couldn’t physically get into the car, and software developers needed to make sure the data being sent across a network was secure. Now there’s this middle ground that nobody was responsible for before. Somebody needs to make sure things are secure. The problem is that right now, nobody is really being blamed when thousands of printers or DVRs are hacked and used in ways other than they were intended. Obviously, the hackers are punished if they’re caught, but the manufacturers are unscathed.

For now, I think that companies need to publish potential weaknesses at the very least. If they’re going to connect devices to the internet in insecure ways, their users should at least be aware of the risk they’re taking. At this point, I think that cyber attacks on the IoT are honestly a good thing. Security breaches are one of the only ways that security is going to be taken seriously and improved. Functionality develops out of necessity, and the more attacks that happen, the more people are going to pay attention to the security of these devices. As of now, customers don’t care about the security of these devices, so the companies don’t care either. Eventually, after enough attacks, companies are going to start to lose business or be punished by the government. Once this starts to happen, security will become a top priority.

I think the IoT will add to technology’s role as a disruptive force in almost every industry. Uber is a big example of what connected technology can do. Manufacturers of just about everything are going to have to invest in technology, because it’s going to be commonplace for everything to be connected. I think in 5 or 10 years, people aren’t going to want to buy anything that they can’t control from their phone in some way or another. There’s surely going to be legislation about it. Things like the Wiretap Act or the Stored Communications Act were never relevant before the internet. In the same way, laws are going to be enacted to deal with IoT issues that haven’t been dealt with before. Government is going to have to get involved. It’s unclear how involved, but there will have to be some sort of intervention eventually.

As for a world of connected devices, it is a little bit scary. Privacy will continue to decrease. There will be greater security risks, but the world will be more efficient. People will get more dependent on technology than we already are, and there are definitely pros and cons to that. Keeping people safe and keeping data out of the wrong hands will get more and more difficult, but the abilities of people to create and innovate will continue to increase with increased connectivity and capabilities. The old adage “With great power comes great responsibility” is a good way to think about the IoT and its future.

Gov. Surveillance vs. Encryption

The government wants to have access to our stuff. In theory, the motives are pure, but the fact is still that they want to be able to see our data when they feel like it’s necessary. Companies want our business. They believe that having the most secure, most private devices and applications will encourage more usage and better business. These goals are contradictory in nature, which is what has gotten us to the place we’re at now. There’s a lot of conflict, and this is a pivotal time for the tech industry. The courts are going to make decisions over the next few years that change the landscape of security and set the tone for how it’s going to work in the future.

In my opinion, the most compelling argument in support of government surveillance and back doors in the security of these devices is the analogy of the anarchic city. If the government has no ability to regulate communications at all, we create a space of anarchy. It’s not quite as dangerous as physical anarchy, because people can’t be physically harmed as easily, but there are still quite a lot of dangers associated with it. We all appreciate the safety and the support that government gives us, and we’re willing to make certain concessions in order to take advantage of that safety. We should make similar concessions when it comes to our cyber-safety. The “I’ve got nothing to hide” argument is also relatively compelling to me. I believe that the government will leave you alone unless they think you’ve done something wrong. I’m willing to give them the ability to look at my stuff, accepting the risk that they look at it unlawfully, in exchange for their ability to crack down on crime more successfully.

But it’s not as simple as whether or not the government has the ability to look. There are technical implications that make it more challenging. The points made about the security risks and technical challenges associated with these back doors are very compelling. I think the most glaring challenge is that if you create a way for government actors to access encrypted data or circumvent security, you open the door for malicious actors to do the same, given the right attack. It’s also true that the more code associated with a security system, the more potential vulnerabilities there are. Making things more complicated and adding new components to the code gives attackers more potential holes to get in. These are just some of the things you would have to worry about if security were changed this way, and there are more problems besides. That’s why I don’t think it’s feasible to ask companies to provide back doors in their systems.

If a company has the ability to aid a police investigation or divulge potentially dangerous information, they have a moral obligation to do so. Any way in which these companies can help without compromising security is important. They shouldn’t, however, be asked to build these functionalities into their products. If the technology reaches a place where it’s easy and reasonable to make these changes, I think it’s important that companies do so. Companies should help the government in whatever ways they can. That being said, I think the current dangers of making these changes are too great to make everyone change how they secure customer data. I think government intervention is inevitable, but given the current landscape, I don’t think increased surveillance is plausible.

Interview Guide Reflection

In my experience, I’ve learned that nothing is more important than connections, or rather, the ability to connect with people. It’s like we talked about when we were discussing diversity in the workplace: people tend to like people who are similar to them. They also tend to see people as similar to them when they can form connections with that person. So when it comes to getting a job, I think networking and the ability to engage people and create connections is at the top of the list. That being said, it’s a hard thing to teach someone with a guide. So I think it’s great that we were able to bring attention to this aspect of the process, but gaining that skill is something our guide won’t be able to help a person with. So even though I think networking is the most important part of the job search, I think the most valuable section in our guide is the one on resources for interview prep. In a lot of ways there are two steps to getting a job: landing the interview, and landing the job. Some people struggle to even get an interview, but others land plenty of interviews and just can’t close the deal. Going into interviews in the past was pretty confusing for me. I knew that I was a social, personable guy, but I really had no clue what I should have been doing to prepare. Doing your research on the company and practicing technical questions you might see in the interview are truly make-or-break actions that can have a serious impact on the interview process, which is why I wish I had known then what I know now.

The question about college is a tough one. On one hand, not all learning is for the purpose of being able to do a job, but on the other hand, that’s the primary goal of the majority of students coming out of college. With that in mind, I think that colleges should (and probably have already started to) adjust their curricula to meet this need. I don’t think that theory and general learning should be forgotten, because those are really important in allowing students to understand what they’re doing, but I think there are some things that could be added or given more attention. For example, I have come to the conclusion that with any given task in computing, actually writing the code is never the hardest part. The parts that take the most time and cause the most frustration are things like knowing all the different pieces of software involved, knowing how to build the project, and understanding the environment it’s running in. There are so many other miscellaneous components to programming, and I feel that the CSE department has largely ignored that to some extent. They have basically said “Here are the student machines. They’re Linux, and they’ll probably have everything you need for most of your classes. Happy programming!” And it’s great that we have a development environment that’s easy to use, but understanding how to set up our own development environment, compiling code on different OSes, and gaining the skills required to set all this stuff up are the types of things I believe to be slightly lacking in our curriculum.


Manning: Hero or Traitor?

Bradley Manning divulged classified videos and documents that he had access to through his position in the US military. I think that ruling on how ethical his actions were isn’t as straightforward as some may think. When the government and the military are involved, things are a lot tougher to decipher. The nature of military action in foreign countries is one of deception and questionable ethics. Killing people is part of the job, but the hope is to save more lives than they take, and to save the lives of people from their own country. This makes judging actions as ethical much tougher.

My opinion of the government and the military is that they are definitely going to have to do some things that seem less than great if they want to keep Americans safe and keep America successful. I also think that most of the things they’re dealing with are extremely complicated. Actions that might seem awful to the general public probably make more sense when all the information is available. Many military officials would claim that divulging that information to the public is dangerous and not helpful. I’m sure Manning had been trained not to divulge information and had signed numerous documents stating that he wouldn’t misuse his credentials. I’m inclined to believe that it wasn’t ethical for him to violate those agreements.

The other side of the argument is that he wouldn’t have shared these files if he didn’t feel they were evidence of clear wrongdoing on the part of the organizations in charge. Manning claims that he thought these videos depicted crimes committed by the US military. If that’s the case, sharing them with the world would be ethical, and he would be protected by whistleblower protection laws.

My opinion is that he was an extremely troubled individual with a really hard upbringing, and that personal reasons affected his decisions. After years of struggling, he ended up feeling disillusioned with the army for various reasons, some related to the work he was doing and some unrelated. From issues with his family to issues with his platoon to identity and gender struggles, Manning had a hard time supporting the army and doing his work happily. I think that the WikiLeaks organization took advantage of a troubled individual who didn’t feel like he belonged. Manning had the power to give WikiLeaks what they wanted, and he was frustrated enough that he was willing to give them the type of info they desired. So in the end, I don’t think that Manning made an ethical decision, and I don’t think he should have been protected by whistleblower laws, because the government is doing the things it’s doing in the name of public safety. But even though I don’t think he acted ethically, I don’t think it was all his fault. I think he was manipulated by the WikiLeaks organization.