Wednesday, September 24, 2008

Web 2.0 versus Web 1.0

The reading for September 24 explained the differences between Web 1.0 and Web 2.0. The Netscape versus Google section of the text really clarified the differences for me. I have used Google for as long as I can remember for navigating the Internet; however, I used to use Netscape to get to Google. Web 1.0 is considered software, whereas Web 2.0 consists of applications that run on top of software. Software relies on being updated every couple of years to stay current, while Web 2.0 applications update themselves continuously. This makes Web 2.0 superior. Another feature that makes Web 2.0 applications more successful is that the innovators of Web 2.0 understood the collective power of the small sites that made up the bulk of the web’s content (O’Reilly, 2005). Why would you go after a small market of big fish when there is a sea of little fish that will outweigh those few big fish? That is what made Web 2.0 the powerhouse that it is today. Web 2.0 met the needs of the 80% of people whose needs were not being met. The rest of the reading further explained Web 2.0 and how platforms are soon going to be in competition for their existence as Web 2.0 further dominates the Internet with its innovative ideas.

I feel the BitTorrent approach to internet decentralization paved the way for Web 2.0 applications. “Their approach says that every client is also a server; files are broken up into fragments that can be served from multiple locations, transparently harnessing the network of downloaders to provide both bandwidth and data to other users” (O’Reilly, 2005). This was exactly the concept behind Napster. Napster provided an application so that people could trade music, and its catalogue of music was only as good as what the members of Napster were contributing to the site. Napster built the breadth of its collection from its users: it gave access to a site that cost money to make and maintain, and in return got that breadth of collection. Then, once it became popular enough, it could charge money for advertising on its site.
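To make the fragment idea concrete, here is a toy sketch in Python of what O’Reilly describes: a file is split into pieces, and a downloader can fetch each piece from whichever peer happens to hold it. This is not the real BitTorrent protocol (which involves trackers, hashing, and piece negotiation); all names and sizes here are invented for illustration.

```python
# Toy sketch of the BitTorrent idea: a file is split into pieces,
# and a downloader can fetch each piece from any peer that holds it.

PIECE_SIZE = 4  # bytes per piece (tiny, for illustration)

def split_into_pieces(data: bytes, size: int = PIECE_SIZE) -> list[bytes]:
    """Break a file into fixed-size pieces, as a torrent does."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def download(pieces_needed: int, peers: list[dict[int, bytes]]) -> bytes:
    """Fetch each piece from the first peer that has it, then reassemble."""
    assembled = []
    for index in range(pieces_needed):
        piece = next(peer[index] for peer in peers if index in peer)
        assembled.append(piece)
    return b"".join(assembled)

file = b"every client is also a server"
pieces = split_into_pieces(file)

# Two peers, each holding an overlapping subset of the pieces.
peer_a = {i: p for i, p in enumerate(pieces) if i % 2 == 0}
peer_b = {i: p for i, p in enumerate(pieces) if i % 2 == 1 or i == 0}

assert download(len(pieces), [peer_a, peer_b]) == file
```

Notice that neither peer holds the whole file, yet the downloader still ends up with every byte, which is the sense in which “every client is also a server.”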

Bibliography

O’Reilly, Tim. (2005). What is Web 2.0: Design patterns and business models for the next generation of software. Retrieved August 21, 2008 from http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html

How Usenet Groups Successfully Use Bandwidth

There are many computer-mediated communication systems on the Internet. All communication has inherent problems, and computer-mediated communication is no different. The biggest problem that confronts computer-mediated communication is the ability to cooperate for a common good. The term “the common good” as it will be used in this paper will refer to the free flow of ideas on the conversation floor that allows all members to accomplish his/her interactional goals (Kollock and Smith, 1996). The computer-mediated communication that will be the focus of this paper is a group that was observed on Usenet. The group was titled “Thoughts on Current Economic Condition.” One of the most important aspects of being able to provide all members of a group with their common good is to conserve bandwidth, or at least not abuse it. “Abuse of Bandwidth refers to posting extremely long articles, reproducing long sections of text from a previous post rather than summarizing or excerpting the relevant passages, or including long signatures full of comments and diagrams at the end of a post” (Kollock and Smith, 1996, p. 115). This paper is going to discuss how the observed group successfully used bandwidth, and how that resulted in an effective and efficient means to exchange information and carry on their discussion.

The group observed was titled “Thoughts on Current Economic Condition.” As one could imagine, this was a particularly hot topic given the recent events that have been taking place in the economy. Jon Slaughter was the originator of the group. His first post set the stage for a thread that was very conducive to good discussion on the economy.

Posting extremely long articles is considered an abuse of bandwidth. “Bandwidth not only refers to the amount of information that the Usenet can carry and store but also the capacity of members of a group to attend to and consume the information” (Kollock and Smith, 1996, p. 115). In other words, if members of the group post extremely long articles, the Usenet might not have the capacity to function as it ought to. It also means that members of the group might not be able to keep up with the information being posted by other members. This could hinder them from being able to carry on with the discussion in an effective and efficient fashion. This was not the case in the group being discussed; no one in this group free-rode off of others’ efforts to conserve bandwidth. A technique that many members of this group employed for conserving bandwidth was including hyperlinks to other useful sources of information that pertained to the discussion. If the members of the group had included all of the information that they wanted to share from the hyperlink directly in their posts, that would surely have reduced how efficiently the Usenet operated and made the effective exchange of information more difficult.

Reproducing long sections of text from a previous post rather than summarizing or excerpting the relevant passages is also considered an abuse of bandwidth. Members of this group did a good job avoiding this. Paul Hovnanian is an excellent example of a member of the group who thoughtfully excerpted sections of other members’ posts to comment on. At 9:09 pm on September 16, Paul excerpted five lines from a twenty-line post written by Jon Slaughter and used them to contribute to the discussion. This gives someone following the thread a clear idea of exactly what Paul was responding to, without him having to abuse bandwidth.

On Google Groups, a web interface to Usenet, there is a function to hide quoted text. It allows members to hide quoted text so that their posts are not overly long, but if one so wishes, he or she can see exactly what that member was quoting by clicking on the “show quoted text” link. Since other people could not see the quoted text unless they wanted to, members could have reproduced huge sections of others’ posts without it appearing to affect anyone, yet they still did not (Kollock and Smith, 1996). They summarized or excerpted the relevant information from other people’s posts just as if they were posting it for all to see, further conserving bandwidth. At 3:36 pm on September 17, Kris Krieger excerpted the information from Jim Thompson’s post that was relevant to comment on and hid the quote as well, so one would only have to read it if one wished. This further helps the members attain their goals of exchanging information and answering questions, because now they do not have to re-process information that has already been posted.

The last abuse of bandwidth that will be discussed is using obnoxiously long signatures at the end of a post. These include signatures that have diagrams and comments at the end of them. There was no real culprit of using too long a signature. Jim Thompson had the wordiest signature. His signature included the name of the company that he works for, the department he works in, all of his contact information, and a funny comment, plus his actual signature. This does not qualify as free-riding off of others’ efforts to conserve bandwidth by not using overly long signatures; this is appropriate information to have at the end of a post so that other members are able to contact him. All the other members just had their names for signatures, or nothing at all. These members might simply not want to be contacted outside the group, so they left this information out.

The ability of computer-mediated communication to function as it should rests on the members’ ability to cooperate. The group that was observed for this paper is an excellent example of members of a group cooperating to meet their goals, effectively exchanging information, and carrying on their discussion. This group could never have helped its members attain their goals if everyone had not worked together to conserve and properly use bandwidth.


Bibliography

Kollock, P., & Smith, M. (1996). Managing the virtual commons: Cooperation and conflict in computer communities. In Susan C. Herring (Ed.), Computer-mediated communication: Linguistic, social and cross-cultural perspectives (pp. 109-128). Philadelphia: John Benjamins.


http://groups.google.com/group/sci.electronics.design/browse_thread/thread/6c17561943f40637#

strategic use of excerpting relevant information

Paul Hovnanian is an excellent example of a member of the group who thoughtfully excerpted sections of other members’ posts to comment on. At 9:09 pm on September 16, Paul excerpted five lines from a twenty-line post written by Jon Slaughter and used them to contribute to the discussion. This gives someone following the thread a clear idea of exactly what Paul was responding to, without him having to abuse bandwidth.

Tuesday, September 23, 2008

long signatures

Jim Thompson has a rather long signature at the end of his posts. This is an abuse of bandwidth. I do not see the need for him to have all of his contact information at the end of every post. This gentleman is just looking for someone to have a conversation with. There is also no need for a different joke at the end of every post either. Jon Slaughter, on the other hand, along with a lot of other people, keeps his signature short and concise with just his name.

including links in the discussion thread

I feel that the group I am observing is using bandwidth very well. A lot of people are including links to information that positively contributes to the discussion, but they are not just copying and pasting the information into the thread. This conserves bandwidth and yet provides useful information if one chooses to go read it.

Sunday, September 21, 2008

First group observation

Jon Slaughter started the group that I am observing. He is concerned that the United States is heading for an economic disaster. He believes that it is the greed of corporate America that is to blame for the current economic instability.

Some people in the group are using the bandwidth wisely. They are not quoting whole texts; however, some people are. I found this very aggravating because I felt that I had to read the whole quoted section so that I could understand what the person was commenting on. Ross Herbert is one of the biggest culprits of quoting large sections of other people’s postings. He quotes lines of other people’s postings, then only contributes a sentence or two of his own thoughts.

No Spam uses bandwidth wisely. He or she only quotes a line or two from another person’s posting but contributes several lines of his or her own thoughts.

Wednesday, September 17, 2008

New Media and Web Production

Let me first start out by saying that the reading for September 17 was excruciating to read. The reading was filled with jargon that was directed toward an audience that already had a background in the field of internet design. Quite frankly, I do not. I am lucky that I know how to check my email. With that being said, let us decipher what I actually got from this week’s reading.

This week’s reading started off by giving a more detailed history of the development of hypertext than past readings. It was interesting to find out that the idea for hypertext might have been stolen from British Telecom. Everyone enjoys a good scandal, and I am no exception.

Next there was the section on digital imaging. Jason Whitaker said that at one point a photograph was the cornerstone of truth, but now because of digital imaging one cannot be sure if what one is seeing is real or manipulated in some way. Pictures have always been manipulated; however, digital imaging has made it a lot less time consuming. Following that is a long explanation of pixels, bits, and colors pertaining to photography and digital imaging. Of course, no explanation of pixels, bits, and colors would be complete without a brief description of digital photography, which followed it.

Then there was the history and the basics of audio and video. Most of this section was like beeps and clicks to me. However, the psychoacoustic model was interesting. This explains how MP3 cuts out frequencies of sound that we cannot hear in order to make digital music files smaller and faster to download.
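The gist of that psychoacoustic idea can be sketched in a few lines of Python. Real MP3 encoders use transforms and masking curves far more sophisticated than this; the thresholds and signal values below are invented purely for illustration.

```python
# Caricature of the psychoacoustic idea behind MP3: components the ear
# cannot hear (too high, too low, or too quiet) are simply not stored.

AUDIBLE_LOW_HZ = 20       # rough lower limit of human hearing
AUDIBLE_HIGH_HZ = 20_000  # rough upper limit
QUIET_THRESHOLD = 0.01    # amplitudes below this are treated as inaudible

def compress(components: dict[int, float]) -> dict[int, float]:
    """Keep only frequency components a listener could actually hear."""
    return {
        freq: amp
        for freq, amp in components.items()
        if AUDIBLE_LOW_HZ <= freq <= AUDIBLE_HIGH_HZ and amp >= QUIET_THRESHOLD
    }

# A made-up "signal": frequency (Hz) -> amplitude.
signal = {10: 0.5, 440: 0.9, 15_000: 0.3, 30_000: 0.7, 1_000: 0.005}
kept = compress(signal)
assert kept == {440: 0.9, 15_000: 0.3}  # the rest is discarded, saving space
```

Dropping what no listener would notice is why the smaller file still sounds essentially the same.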

Lastly, the reading finishes up with web production. This section gave a good introduction to programming in Hypertext Markup Language, or HTML, and everything else one would need to design a web site. It even gets into mundane details of website design like carefully choosing the colors you use in your site, so that different cultures do not infer the wrong meaning from them.

The text overall was about the convergence of technology, and how the Internet can now be used to do things that it never could. Internet technology and the technology for playing music, for example, have been brought together, so now you can play music on the internet. This is truly amazing because in a relatively short time the Internet went from having very little practicality to being able to do so many useful, everyday things. I remember when I first started using the internet there were no sounds or videos on it; now I can download whole movies and listen to whatever music I choose.

Wednesday, September 10, 2008

free-ridin

The reading for September 10, 2008 dealt with cooperation and conflict in computer communities. It is a widely accepted assumption that computer-mediated communication encourages wide participation, produces a greater level of candor amongst its participants, and emphasizes merit over status. However, there are problems with computer-mediated communication.
To successfully communicate via the internet there must be cooperation. The reading starts by talking about the biggest problem that is associated with cooperation, which is the tendency to behave selfishly. That is to say what is best for an individual leads to a poorer outcome for all (Kollock & Smith, 1996, p. 109).
This leads into the free-rider problem. The free-rider problem says that whenever one cannot be excluded from the benefits that others provide, each person is motivated not to contribute and to free-ride off the efforts of the others (Kollock & Smith, 1996, p. 110). The problem with this is that if no one is contributing to the public good, but everyone is using it, then the public good is going to be exhausted and no one will have it.
The reading then starts talking about the Usenet. The Usenet is the largest computer-mediated communication system in existence and is an excellent example to talk about when discussing cooperation and conflict in computer communities. It first explains how the Usenet works, then uses it as an example of the successes and downfalls of computer-mediated communication. Last, it talks about what can be done and what is being done to address those downfalls.
I can absolutely relate to the idea of free-riding. I feel that I am a hard worker. However, when I am in a group with a bunch of people who do not pull their own weight, it makes me less motivated to do my portion of the work. This was the case this summer where I worked. I was doing energy conservation work and getting paid by the job. That is, I got a set amount of money for the job no matter how long it took me to complete it. So the goal was to get it done as quickly as possible, so as to make as much money as possible. Well, halfway through the summer, two kids who did not know what they were doing joined our crew. Now the pay for the job was getting split four ways, and the job took just as long because they did nothing. That in turn made me not want to contribute to the public good, which in our case was money, because I was still doing half the work with the other gentleman and it was taking us the same amount of time, but now we had to split the public good four ways. It just was not worth it for me to bust my ass anymore.


Kollock, P., & Smith, M. (1996). Managing the virtual commons: Cooperation and conflict in computer communities. In Susan C. Herring (Ed.), Computer-mediated communication: Linguistic, social and cross-cultural perspectives (pp. 109-128). Philadelphia: John Benjamins.

Tuesday, September 9, 2008

The four greatest Internet innovations

There are millions of people that access the Internet every day. Most of them have no idea how it works, and they take things like sending and receiving email for granted. However, there have been many important innovations that have happened in the last four decades that have given us the Internet as we know it today. The four most important innovations that have led to the Internet as we know it today are packet-switching, TCP, IP, and hypertext. Without these innovations, the Internet would not function as it does today.
To many people, how information gets from one place to another on the Internet is a mystery. They just know that when they type in a search word and press enter, they get the information that they want. To say that it is a little harder than that is an understatement. One innovation that helps a person get the information that they want from the Internet is packet-switching.
All information that is sent over the Internet is broken down into thousands of packets (Chechik & Gati). For instance, when you send an email, it is broken down and sent over the Internet. When the email reaches its destination, it is put back together in its original form. It is much easier and faster to send gravel down a pipe than it is a boulder. This holds true for information over the Internet as well; the smaller the bits of information, the faster the message as a whole can reach its destination.
Impressive as that is, it is not the most impressive feature of packet-switching. The individual packets can all be sent over different paths to reach their final destination. As conditions change over the Internet, mainly traffic, packets can be redirected to areas of less resistance (Adams & Clark).
Packet-switching is very important because it gets information to its destination fast; however, packet-switching would not be possible without the transmission control protocol, or TCP. TCP is one of the protocols that govern packet-switching. As stated before, packets can travel many different routes to reach their final destination (Loshin). Some of those packets inevitably will get lost. TCP automatically sends word to the source of that information to resend that one packet; this ensures that you will receive the whole message every time.
That is not TCP’s only job. TCP also numbers every packet. That way the information can be successfully put together when it reaches its destination. There is TCP on both the sending and receiving end of a message. The TCP on the sending end of the information numbers the packets while the TCP on the receiving end of the information puts the information back together.
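The two jobs just described, numbering packets and recovering lost ones, can be sketched in Python. This is a simulation of the idea, not real TCP (which works with byte-stream sequence numbers, acknowledgments, and timers); the message and packet size are invented for illustration.

```python
# Sketch of TCP-style sequencing: the sender numbers each packet, and the
# receiver reassembles by number and reports anything that went missing.

def send(message: str, size: int = 5) -> list[tuple[int, str]]:
    """Split a message into numbered packets."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return list(enumerate(chunks))

def receive(packets: list[tuple[int, str]], total: int) -> tuple[str, list[int]]:
    """Reassemble by sequence number; report which numbers never arrived."""
    got = dict(packets)
    missing = [n for n in range(total) if n not in got]
    message = "".join(got[n] for n in sorted(got))
    return message, missing

outgoing = send("the whole message every time")
# Simulate the network: packets arrive out of order and one is lost.
arrived = [p for p in reversed(outgoing) if p[0] != 2]

partial, missing = receive(arrived, total=len(outgoing))
assert missing == [2]        # the receiver asks the source for packet 2 again
arrived.append(outgoing[2])  # the sender retransmits the lost packet

complete, missing = receive(arrived, total=len(outgoing))
assert missing == []
assert complete == "the whole message every time"
```

The sequence numbers are what let the receiver both put the pieces back in order and notice that one never arrived.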
The Internet protocol, or IP, is the second protocol that governs packet-switching. IP is what breaks the information that you want into packets. This was innovative because it dramatically increased the speed at which information could be sent over the Internet by dramatically decreasing the size of the information that was sent.
IP is also responsible for making sure that the individual packets get where they are going. IP creates and binds what is called the header to every packet that is sent over the Internet. The header is like a mailing address: it contains information about who and where the packet is from, when the information was sent, who and where the information is going, the subject of the information, and the error information (Adams & Clark). Without IP, sending information on the Internet would be like sending mail without a mailing address; it would never get where it needed to go.
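The mailing-address analogy can be made concrete with a small Python sketch. A real IPv4 header is a packed binary structure with many more fields than this; the field names and addresses below are simplified inventions for illustration.

```python
# Sketch of the "header as mailing address" idea: every packet carries
# delivery information alongside its payload.
from dataclasses import dataclass

@dataclass
class Header:
    source: str        # who/where the packet is from
    destination: str   # who/where it is going
    sequence: int      # which piece of the message this is
    checksum: int      # error information for detecting corruption

def make_packet(src: str, dst: str, seq: int, payload: bytes):
    """Bind a header to a payload, as IP does for every packet."""
    header = Header(src, dst, seq, checksum=sum(payload) % 256)
    return header, payload

header, payload = make_packet("192.0.2.1", "198.51.100.7", 0, b"hello")
assert header.destination == "198.51.100.7"    # routers read this "address"
assert header.checksum == sum(b"hello") % 256  # receiver re-checks for errors
```

Just as the post office reads only the envelope and not the letter, routers along the way look only at the header, never the payload.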
Everyone who has used the Internet has used the last innovation that is going to be discussed, but might not know what it is called. It is called hypertext, and by definition “hypertext is a method of storing data through a computer program that allows a user to create and link fields of information at will and to retrieve the data nonlinearly” (Hypertext, 2003). The creation of hypertext has changed the way everyone navigates the internet. It used to be that when you went to a webpage, email, etc., you had to back out of it or type in a new URL to proceed forward. Nowadays, if someone receives an email with a hyperlink to some piece of information, that person can click on the hyperlink to go look at that piece of information. If the new piece of information contains another hyperlink to something else, that person can click on it and go there as well, and so on and so forth.
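That chain of clicks, jumping from one document to the next by link rather than by backing out, can be sketched in Python. The page names, text, and links here are entirely made up for illustration; real hypertext lives in HTML documents connected by URLs.

```python
# Tiny sketch of the hypertext idea: documents link to each other, so a
# reader can move nonlinearly instead of backing out and typing new URLs.

pages = {
    "email":  {"text": "see the attached report",   "links": ["report"]},
    "report": {"text": "based on last year's data", "links": ["data"]},
    "data":   {"text": "raw numbers",               "links": []},
}

def follow(start: str) -> list[str]:
    """Click the first link on each page until a page has no links."""
    path = [start]
    while pages[path[-1]]["links"]:
        path.append(pages[path[-1]]["links"][0])
    return path

assert follow("email") == ["email", "report", "data"]
```

The reader never types a new address; each document itself carries the pointers to what can be read next, which is the “nonlinear retrieval” in the definition above.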
The Internet has evolved very fast since its inception in the 1960s. People use the Internet every day for almost every reason. However, without the four main innovations that were discussed in this essay, the Internet that we know might not exist. That is why packet-switching, TCP, IP, and hypertext are the four most important innovations that led to the Internet as we know it today. These innovations led to the high speed, reliability, and ease of use that we have come to enjoy today with the Internet.

Works Cited

Loshin, P. (2003). TCP/IP clearly explained. Boston: Morgan Kaufmann.


Chechik, S., & Gati, A. Packet Switching. Retrieved September 8, 2008, from
http://www2.rad.com/networks/2004/PacketSwitching/main.htm


Adams & Clark

Hypertext. (2003). In The Internet: A historical encyclopedia (vol. 1, p. 15). Bakersfield, CA: John Wiley & Sons Inc.

Wednesday, September 3, 2008

Adams and Clark Ch.2

Even though I have been using the internet since I was ten years old, there was a lot of information in chapter two of the Adams and Clark textbook that I did not know about the internet. It was especially interesting reading how scholars classify the internet as a medium. It is not just an interpersonal medium, because on the internet you can talk to and reach more than just one person at a time. Take a telephone, for instance: a telephone is primarily an interpersonal medium. However, it has small-group medium potential, say, if you were in a meeting and you conferenced someone in over the telephone. Those are the only two media a telephone can be classified as. Then there is television, which can serve as a mass medium and can reach many different people.
The internet is unlike any of the other mediums that came before it. The internet has the ability to act as all mediums, and that is why scholars have classified it as a macromedium or metamedium. It is considered a macromedium because it has the ability to reach a global audience, but it can also be used to access the smallest bits of personal information tailored to an audience of one. It is quite impressive that you can email your next-door neighbor about next week’s barbeque while talking to someone in India about why an appliance does not work. Then there is the classification metamedium, which means medium of media. This describes how the internet serves as a platform for older media. Now you can go on the internet and get your daily news broadcast, your newspaper, even make a long distance phone call, all in one place. This is interesting to me because even though I have been a user of the internet for many years now, I was still unaware of the full capabilities of the internet.
I can also appreciate how the internet can be synchronous or asynchronous. When you are talking to someone in person or on the phone, you have to have a response to what they just said right away. The conversation has to be synchronous. The opposite extreme of that would be mail. If someone writes you a letter, you can take as long as you want to make the letter just right. That is asynchronous. It is nice when you are talking to someone via instant messenger and you can be a little asynchronous with the conversation to make sure what you say is not out of line. I remember when I was in middle school and first started being interested in girls; the asynchrony of the internet was a very helpful feature.