Tuesday, November 24, 2009
Here is the link to my webpage for Assignment #6.
It isn't particularly fancy, as this was my first time doing HTML since high school, but I'm pretty sure it has all it needs. :)
Saturday, November 21, 2009
Week 11 reading notes
Even though I was signed in to the Pitt Library website, I was still prompted to pay for each of the David Hawking articles, so I wasn't able to read them.
Shreeves, S. L., Habing, T. O., Hagedorn, K., & Young, J. A. (2005). Current developments and future trends for the OAI protocol for metadata harvesting. Library Trends, 53(4), 576-589.
“The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) has been widely adopted since its initial release in 2001. Initially developed as a means to federate access to diverse e-print archives through metadata harvesting (Lagoze & Van de Sompel, 2003), the protocol has demonstrated its potential usefulness to a broad range of communities. According to the Experimental OAI Registry at the University of Illinois Library at Urbana–Champaign (UIUC) (Experimental OAI Registry at UIUC, n.d.), there are currently over 300 active data providers using the production version (2.0) of the protocol from a wide variety of domains and institution types. Developers of both open source and commercial content management systems (such as D-Space and CONTENTdm) are including OAI data provider services as part of their products.”
“The OAI world is divided into data providers or repositories, which traditionally make their metadata available through the protocol, and service providers or harvesters, who completely or selectively harvest metadata from data providers, again through the use of the protocol (Lagoze & Van de Sompel, 2001).”
“As the OAI community has matured, and especially as the number of OAI repositories and the number of data sets served by those repositories has grown, it has become increasingly difficult for service providers to discover and effectively utilize the myriad repositories. In order to address this difficulty the OAI research group at UIUC has developed a comprehensive, searchable registry of OAI repositories (Experimental OAI Registry at UIUC, n.d.).”
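To make the harvesting model concrete, here is a minimal sketch of an OAI-PMH request in Python. The verb and parameter names (ListRecords, metadataPrefix=oai_dc) are the protocol's own; the repository base URL is a made-up placeholder.

    import urllib.request
    import xml.etree.ElementTree as ET

    # Hypothetical repository endpoint -- substitute any real OAI-PMH base URL.
    BASE_URL = "http://example.edu/oai"

    # OAI-PMH requests are ordinary HTTP GETs. "ListRecords" is one of the
    # protocol's six verbs, and "oai_dc" (simple Dublin Core) is the one
    # metadata format every repository is required to support.
    url = BASE_URL + "?verb=ListRecords&metadataPrefix=oai_dc"

    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)

    # Print the Dublin Core title of each harvested record.
    for title in tree.iter("{http://purl.org/dc/elements/1.1/}title"):
        print(title.text)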
Bergman, M. K. (2001). The deep Web: Surfacing hidden value. Journal of Electronic Publishing, 7(1). http://www.press.umich.edu/jep/07-01/bergman.html
“Traditional search engines can not "see" or retrieve content in the deep Web — those pages do not exist until they are created dynamically as the result of a specific search. Because traditional search engine crawlers can not probe beneath the surface, the deep Web has heretofore been hidden.
The deep Web is qualitatively different from the surface Web. Deep Web sources store their content in searchable databases that only produce results dynamically in response to a direct request. But a direct query is a "one at a time" laborious way to search. BrightPlanet's search technology automates the process of making dozens of direct queries simultaneously using multiple-thread technology and thus is the only search technology, so far, that is capable of identifying, retrieving, qualifying, classifying, and organizing both "deep" and "surface" content.”
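BrightPlanet's actual technology is proprietary, but the "multiple-thread" idea in the passage above is easy to sketch: send the same direct query to many searchable databases at once instead of one at a time. A toy Python illustration, with entirely hypothetical endpoints:

    from concurrent.futures import ThreadPoolExecutor
    import urllib.parse
    import urllib.request

    # Hypothetical searchable databases -- their contents only exist as
    # dynamic responses to a direct query, so a crawler never sees them.
    ENDPOINTS = [
        "http://db-one.example.com/search?q=",
        "http://db-two.example.com/search?q=",
        "http://db-three.example.com/search?q=",
    ]

    def direct_query(endpoint, terms):
        """Send one direct query to one searchable database."""
        with urllib.request.urlopen(endpoint + urllib.parse.quote(terms)) as r:
            return r.read()

    # Fan the same query out to every database simultaneously, instead of
    # the laborious "one at a time" approach.
    with ThreadPoolExecutor(max_workers=len(ENDPOINTS)) as pool:
        pages = list(pool.map(lambda e: direct_query(e, "deep web"), ENDPOINTS))

    print(len(pages), "result pages collected")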
• “Public information on the deep Web is currently 400 to 550 times larger than the commonly defined World Wide Web.
• The deep Web contains 7,500 terabytes of information compared to nineteen terabytes of information in the surface Web.
• The deep Web contains nearly 550 billion individual documents compared to the one billion of the surface Web.
• More than 200,000 deep Web sites presently exist.
• Sixty of the largest deep-Web sites collectively contain about 750 terabytes of information — sufficient by themselves to exceed the size of the surface Web forty times.
• On average, deep Web sites receive fifty per cent greater monthly traffic than surface sites and are more highly linked to than surface sites; however, the typical (median) deep Web site is not well known to the Internet-searching public.
• The deep Web is the largest growing category of new information on the Internet.
• Deep Web sites tend to be narrower, with deeper content, than conventional surface sites.
• Total quality content of the deep Web is 1,000 to 2,000 times greater than that of the surface Web.
• Deep Web content is highly relevant to every information need, market, and domain.
• More than half of the deep Web content resides in topic-specific databases.
• A full ninety-five per cent of the deep Web is publicly accessible information — not subject to fees or subscriptions.”
“It has been said that what cannot be seen cannot be defined, and what is not defined cannot be understood. Such has been the case with the importance of databases to the information content of the Web. And such has been the case with a lack of appreciation for how the older model of crawling static Web pages — today's paradigm for conventional search engines — no longer applies to the information content of the Internet.”
“The sixty known, largest deep Web sites contain data of about 750 terabytes (HTML-included basis) or roughly forty times the size of the known surface Web. These sites appear in a broad array of domains from science to law to images and commerce. We estimate the total number of records or documents within this group to be about eighty-five billion.
Roughly two-thirds of these sites are public ones, representing about 90% of the content available within this group of sixty. The absolutely massive size of the largest sites shown also illustrates the universal power function distribution of sites within the deep Web, not dissimilar to Web site popularity or surface Web sites. One implication of this type of distribution is that there is no real upper size boundary to which sites may grow.”
“Directed query technology is the only means to integrate deep and surface Web information. The information retrieval answer has to involve both "mega" searching of appropriate deep Web sites and "meta" searching of surface Web search engines to overcome their coverage problem. Client-side tools are not universally acceptable because of the need to download the tool and issue effective queries to it. Pre-assembled storehouses for selected content are also possible, but will not be satisfactory for all information requests and needs. Specific vertical market services are already evolving to partially address these challenges. These will likely need to be supplemented with a persistent query system customizable by the user that would set the queries, search sites, filters, and schedules for repeated queries.”
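The "persistent query system" described at the end is essentially a saved search that re-runs itself on a schedule. A rough Python sketch of what such a user-customizable profile might look like (the field names are my own, not Bergman's):

    import time

    # A user-customizable persistent query profile: set the terms, target
    # sites, filters, and schedule once, and the system re-runs the search.
    persistent_query = {
        "terms": "metadata harvesting",
        "search_sites": ["db-one.example.com", "db-two.example.com"],
        "filters": {"language": "en", "public_only": True},
        "repeat_every_seconds": 24 * 60 * 60,  # once a day
    }

    def run_query(profile):
        # In a real system this would issue direct queries to each site
        # and apply the filters to whatever comes back.
        print("Searching", profile["search_sites"], "for", profile["terms"])

    while True:  # runs forever, like a standing search profile
        run_query(persistent_query)
        time.sleep(persistent_query["repeat_every_seconds"])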
Wednesday, November 18, 2009
Week 10 reading notes
Mischo, W. (July/August 2005). Digital Libraries: challenges and influential work. D-Lib Magazine. 11(7/8). http://www.dlib.org/dlib/july05/mischo/07mischo.html
“Effective search and discovery over open and hidden digital resources on the Internet remains a problematic and challenging task. The difficulties are exacerbated by today's greatly distributed scholarly information landscape. This distributed information environment is populated by silos of: full-text repositories maintained by commercial and professional society publishers; preprint servers and Open Archive Initiative (OAI) provider sites; specialized Abstracting and Indexing (A & I) services; publisher and vendor vertical portals; local, regional, and national online catalogs; Web search and metasearch engines; local e-resource registries and digital content databases; campus institutional repository systems; and learning management systems.”
“For years, information providers have focused on developing mechanisms to transform the myriad distributed digital collections into true "digital libraries" with the essential services that are required to make these digital libraries useful to and productive for users. As Lynch and others have pointed out, there is a huge difference between providing access to discrete sets of digital collections and providing digital library services (Lynch, 2002). To address these concerns, information providers have designed enhanced gateway and navigation services on the interface side and also introduced federation mechanisms to assist users through the distributed, heterogeneous information environment. The mantra has been: aggregate, virtually collocate, and federate. The goal of seamless federation across distributed, heterogeneous resources remains the holy grail of digital library work.”
Paepcke, A. et al. (July/August 2005). Dewey meets Turing: librarians, computer scientists and the digital libraries initiative. D-Lib Magazine. 11(7/8). http://www.dlib.org/dlib/july05/paepcke/07paepcke.html
“In 1994 the National Science Foundation launched its Digital Libraries Initiative (DLI). The choice of combining the word digital with library immediately defined three interested parties: librarians, computer scientists, and publishers. The eventual impact of the Initiative reached far beyond these three groups. The Google search engine emerged from the funded work and has changed working styles for virtually all professions and private activities that involve a computer.”
“For computer scientists NSF's DL Initiative provided a framework for exciting new work that was to be informed by the centuries-old discipline and values of librarianship. The scientists had been trained to use libraries since their years of secondary education. They could see, or at least imagine how current library functions would be moved forward by an injection of computing insight.
Digital library projects were for many computer scientists the perfect relief from the tension between conducting 'pure' research and impacting day-to-day society. Computing sciences are called on to continually generate novelty. On the other hand, they experience both their own desire, as well as funders' calls for deep impact on society and neighboring scientific fields. Work on digital libraries promised a perfect resolution of that tension.”
“For librarians the new Initiative was promising from two perspectives. They had observed over the years that the natural sciences were beneficiaries of large grants, while library operations were much more difficult to fund and maintain. The Initiative would finally be a conduit for much needed funds.
Aside from the monetary issues, librarians who involved themselves in the Initiative understood that information technologies were indeed important to ensure libraries' continued impact on scholarly work. Obvious opportunities lay in novel search capabilities, holdings management, and instant access. Online Public Access Catalogs (OPACS) constituted the entirety of digital facilities for many libraries. The partnership with computer science would contribute the expertise that was not yet widely available in the library community.”
“The coalition between the computing and library communities had been anchored in a tacit understanding that even in the 'new' world there would be coherent collections that one would operate on to search, organize, and browse. The collections would include multiple media; they would be larger than current holdings; and access methods would change. But the scene would still include information consumers, producers, and collections. Some strutting computer scientists predicted the end of collection gatekeeping and mediation between collections and their consumers; librarians in response clarified for their sometimes naive computing partners just how much key information is revealed in a reference interview. But other than these maybe occasionally testy exchanges, the common vision of better and more complete holdings prevailed.
The Web not only blurred the distinction between consumers and producers of information, but it dispersed most items that in the aggregate should have been collections across the world and under diverse ownership. This change undermined the common ground that had brought the two disciplines together.”
Lynch, Clifford A. "Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age" ARL, no. 226 (February 2003): 1-7.
The link provided in the syllabus didn’t work, so I had to look up the article through the Pitt Library search - https://sremote.pitt.edu/bm~doc/,DanaInfo=www.arl.org+br226ir.pdf
“The development of institutional repositories emerged as a new strategy that allows universities to apply serious, systematic leverage to accelerate changes taking place in scholarship and scholarly communication, both moving beyond their historic relatively passive role of supporting established publishers in modernizing scholarly publishing through the licensing of digital content, and also scaling up beyond ad-hoc alliances, partnerships, and support arrangements with a few select faculty pioneers exploring more transformative new uses of the digital medium.”
“In my view, a university-based institutional repository is a set of services that a university offers to the members of its community for the management and dissemination of digital materials created by the institution and its community members. It is most essentially an organizational commitment to the stewardship of these digital materials, including long-term preservation where appropriate, as well as organization and access or distribution.”
“At the most basic and fundamental level, an institutional repository is a recognition that the intellectual life and scholarship of our universities will increasingly be represented, documented, and shared in digital form, and that a primary responsibility of our universities is to exercise stewardship over these riches: both to make them available and to preserve them. An institutional repository is the means by which our universities will address this responsibility both to the members of their communities and to the public. It is a new channel for structuring the university's contribution to the broader world, and as such invites policy and cultural reassessment of this relationship.”
“To summarize, institutional repositories can facilitate greatly enhanced access to traditional scholarly content by empowering faculty to effectively use the new dissemination capabilities offered by the network.”
“An institutional repository can fail over time for many reasons: policy (for example, the institution chooses to stop funding it), management failure or incompetence, or technical problems. Any of these failures can result in the disruption of access, or worse, total and permanent loss of material stored in the institutional repository. As we think about institutional repositories today, there is much less redundancy than we have had in our systems of print publication and libraries, so any single institutional failure can cause more damage.”
“I believe that institutional repositories will promote progress in the development and deployment of infrastructure standards in a variety of difficult or neglected areas…
-Preservable Formats
-Identifiers
-Rights Documentation and Management”
Wednesday, October 21, 2009
Week 8 reading notes
W3schools HTML Tutorial
(http://www.w3schools.com/HTML/html_intro.asp)
This was a tutorial about the basics of HTML and how websites are built with it. It explained everything step by step and was clearly aimed at beginners, so I was able to get a lot out of it. As with past readings, a lot of the content I already knew but had a hard time putting together in my head. I knew the absolute basics of HTML from posting in blogs before, but this tutorial brought it all together in terms I could grasp quickly. It tended to repeat itself a little, but overall it seems like a really good resource for starting to build webpages, and I'll probably be using it again soon.
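For my own reference, this is roughly the kind of bare-bones page the tutorial builds up to (my own minimal sketch, not an example copied from the tutorial):

    <!DOCTYPE html>
    <html>
      <head>
        <!-- the head holds information about the page -->
        <title>My First Page</title>
      </head>
      <body>
        <!-- the body holds everything the visitor actually sees -->
        <h1>This is a heading</h1>
        <p>This is a paragraph.</p>
        <a href="http://www.w3schools.com/">This is a link.</a>
      </body>
    </html>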
HTML Cheat Sheet
(http://webmonkey.wired.com/webmonkey/reference/html_cheatsheet/)
I got a 404 Error when I tried to view this page, and nothing came up when I tried the search feature for the website, so I was never able to view it.
W3 School Cascading Style Sheet Tutorial
(http://www.w3schools.com/css/)
I know a lot less about CSS than about basic HTML, so here are some notes (I put spaces inside the tags so Blogger wouldn't render them as actual HTML):
“What is CSS?
• CSS stands for Cascading Style Sheets
• Styles define how to display HTML elements
• Styles were added to HTML 4.0 to solve a problem
• External Style Sheets can save a lot of work
• External Style Sheets are stored in CSS files
Styles Solved a Big Problem
• HTML was never intended to contain tags for formatting a document.
• HTML was intended to define the content of a document, like:
o < h1 > This is a heading < /h1 >
o < p > This is a paragraph. < /p >
• When tags like < span >, and color attributes were added to the HTML 3.2 specification, it started a nightmare for web developers. Development of large web sites, where fonts and color information were added to every single page, became a long and expensive process.
• To solve this problem, the World Wide Web Consortium (W3C) created CSS.
• In HTML 4.0, all formatting could be removed from the HTML document, and stored in a separate CSS file.
• All browsers support CSS today.
CSS defines HOW HTML elements are to be displayed.
Styles are normally saved in external .css files. External style sheets enable you to change the appearance and layout of all the pages in a Web site, just by editing one single file!”
This seemed a lot more complicated and was harder for me to understand, but it was interesting to learn how CSS came about as a solution to the formatting tags that crept into later versions of HTML. This was the same kind of step-by-step tutorial as the HTML one, using examples and giving you the opportunity to try each step of the process after every example.
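To make the external style sheet idea concrete, here is my own minimal sketch (not the tutorial's): the HTML page stays free of formatting and just links to a separate .css file, and every page that links the same file gets the same look.

    <!-- page.html: the markup stays free of formatting -->
    <html>
      <head>
        <link rel="stylesheet" type="text/css" href="styles.css">
      </head>
      <body>
        <h1>This is a heading</h1>
        <p>This is a paragraph.</p>
      </body>
    </html>

    /* styles.css: editing this one file restyles every page that links it */
    h1 { color: navy; font-family: Georgia, serif; }
    p  { color: #333333; }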
Tuesday, October 13, 2009
Assignment #4
The topic I chose was how to make a LOLcat from www.icanhascheezburger.com.
Link to video
Links to annotated images:
(One) (Two) (Three) (Four) (Five)
Monday, October 12, 2009
Week 7 comments on others' blogs
http://suzydeucher2600.blogspot.com/2009/10/reading-notes-week-7.html?showComment=1255377882981#c6947361342896832011
http://knivesnmatches.blogspot.com/2009/10/1020-readings.html?showComment=1255385910228#c6308310386173832883
Sunday, October 11, 2009
Week 7 reading notes
How Internet Infrastructure Works
By Jeff Tyson
(http://computer.howstuffworks.com/internet-infrastructure.htm/printable)
This article explained in simple terms how the internet works: it is essentially a series of connected networks. It talked about how companies attach their own networks to the internet, and how those networks rely on Network Access Points (NAPs), backbones, and routers to communicate with each other. It went into detail about how routers work and what their purpose is – making sure information gets where it needs to be, and keeping information from going where it isn't needed. That is how information moves between separate networks – a router joins two networks together, passing information from one to the next.
More quotes:
“Internet Backbone:
Today there are many companies that operate their own high-capacity backbones, and all of them interconnect at various NAPs around the world. In this way, everyone on the Internet, no matter where they are and what company they use, is able to talk to everyone else on the planet. The entire Internet is a gigantic, sprawling agreement between companies to intercommunicate freely.”
“The IP stands for Internet Protocol, which is the language that computers use to communicate over the Internet. A protocol is the pre-defined way that someone who wants to use a service talks with that service. The "someone" could be a person, but more often it is a computer program like a Web browser.”
“In 1983, the University of Wisconsin created the Domain Name System (DNS), which maps text names to IP addresses automatically. This way you only need to remember www.howstuffworks.com, for example, instead of HowStuffWorks.com's IP address.”
“Internet servers make the Internet possible. All of the machines on the Internet are either servers or clients. The machines that provide services to other machines are servers. And the machines that are used to connect to those services are clients. There are Web servers, e-mail servers, FTP servers and so on serving the needs of Internet users all over the world.”
“Any server machine makes its services available using numbered ports -- one for each service that is available on the server.”
“Once a client has connected to a service on a particular port, it accesses the service using a specific protocol. Protocols are often text and simply describe how the client and server will have their conversation. Every Web server on the Internet conforms to the hypertext transfer protocol (HTTP).”
I liked how this article was aimed at readers who may not have much background in the terms and processes involved. How the internet works seems pretty abstract to me – it's not a physical thing like a letter being mailed from one place to another; it's information being sent invisibly across great distances in tiny amounts of time. If you're like me and still have a hard time wrapping your head around that huge concept, it seems almost like magic – clicking on things or typing in words makes things appear on your screen, intangibly and instantaneously, out of nothingness. This article did a good job of describing exactly how the internet retrieves and communicates information from network to network without getting bogged down in technical terms that might become confusing.
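To make the DNS, port, and protocol ideas from the quotes concrete, here is a small Python sketch (using www.howstuffworks.com, the article's own example, as the hostname): it resolves the name to an IP address the way DNS does, then speaks HTTP on the standard Web port.

    import socket

    # DNS: translate a human-readable name into an IP address.
    ip_address = socket.gethostbyname("www.howstuffworks.com")
    print("Resolved to", ip_address)

    # Ports and protocols: Web servers conventionally listen on port 80
    # and speak HTTP, so we open a connection there and send a request.
    with socket.create_connection(("www.howstuffworks.com", 80)) as sock:
        sock.sendall(b"HEAD / HTTP/1.1\r\n"
                     b"Host: www.howstuffworks.com\r\n"
                     b"Connection: close\r\n\r\n")
        print(sock.recv(200).decode(errors="replace"))  # status line and headers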
*&*
Pace, A. K. (2004, February 1). Dismantling integrated library systems. Library Journal, 129(2), 34-36.
(https://sremote.pitt.edu/ehost/,DanaInfo=web.ebscohost.com+detail?vid=2&hid=105&sid=3f9c9661-29b4-4cd7-be06-47e0a0b294c7%40sessionmgr104&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=hch&AN=12125485)
The link in the syllabus didn’t work, but I was able to find the article through Pitt’s ULS article search.
This article talked about how libraries are dismantling older systems and creating new ones “out of frustration with the inflexible and nonextensible technology of their proprietary systems.”
More quotes:
“Librarians and their vendors have created a tougher world for themselves, with interoperability the only solution.”
“In the newly dismantled library system, many expect that new modules will communicate with old ones, products from different vendors will work together, and a suite of existing standards will make distributed systems seem transparently whole. But in an ironic twist, most of the touted interoperability is between a vendor's own modules (sometimes) or between a library's homegrown solutions and its own ILS (sometimes). Today, interoperability in library automation is more myth than reality. Some of us wonder if we may lose more than we gain in this newly dismantled world.”
“Whenever one tinkers with either the back or front end of such a sophisticated system, there is a temptation to start from scratch. This can be daunting, even crippling…. Not only is creating a completely new ILS unrealistic, but Roland Dietz, Endeavor's president and CEO, suggests that even "incremental functionality improvements [to existing systems] are more and more expensive." Moreover, libraries no longer want to search myriad information silos but desire one-stop search and retrieval.”
“Librarians are also motivated to seek solutions because of healthy competition with peers and disparate information resources. When libraries try to meet new needs with technology, such as federated searching, their ILS can rarely answer the call. Libraries are forced to look at new technology and create a solution themselves or purchase a standalone product.”
“Libraries don't pay enough for their ILS. Compared with fees for other technologies--relational database management systems, server hardware and software, desktop replacement cycles--ILS maintenance fees are cheap. However, librarians' resistance to paying for development is often cited for the lack of technological advancement within the traditional ILS.”
“Some of the best ideas in online library services have come not from vendors but from librarians themselves… Open source software (OSS) has offered libraries the freedom to experiment with, develop, and offer innovative services. Nonetheless, a full-scale OSS library system that would work for the largest institutions has yet to emerge. Efforts like Koha have success with only the most basic functionality.”
“Our future, like our past, lies in integration. Maintaining standalone modules with loosely integrated or moderately interoperable functions is too expensive for libraries. This is why libraries sought integrated systems in the first place.”
“Library vendors have two choices. They can continue to maintain large systems that use proprietary methods of interoperability and promise tight integration of services for their customers. Or, they can choose to dismantle their modules in such a way that librarians can reintegrate their systems through web services and standards, combining new with the old modules as well as the new with each other.”
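Pace's "web services and standards" point can be made concrete with SRU (Search/Retrieve via URL), a real standard that lets any client query any compliant catalog over plain HTTP. A minimal Python sketch; the endpoint URL is hypothetical, but the parameters are SRU's own:

    import urllib.parse
    import urllib.request

    # Hypothetical SRU endpoint for a library catalog; any vendor's module
    # that speaks the same standard could answer the same request.
    SRU_BASE = "http://catalog.example.edu/sru"

    params = urllib.parse.urlencode({
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": 'title="integrated library systems"',  # CQL query syntax
        "maximumRecords": "5",
    })

    with urllib.request.urlopen(SRU_BASE + "?" + params) as response:
        print(response.read()[:500])  # XML records, whatever system answers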
*&*
Sergey Brin and Larry Page: Inside the Google machine.
(http://www.ted.com/index.php/talks/sergey_brin_and_larry_page_on_google.html)
This was a video of a talk with Sergey Brin and Larry Page, the creators of Google. They begin by showing a model of the earth and how strong internet activity was in various parts of the world at a particular time of day, and how different areas of the world are wired to each other through internet traffic, with the strongest links running across the United States and from North America to Europe.
Sergey Brin says that the way to expand Google is to get more searches, and he talks a little about the Google Foundation and the organizations it was involved in. Larry Page talks about some Google projects like Googlette, and about innovations like Google Deskbar, Google Answers, Froogle, and Blogger. He also talks about AdSense, which puts relevant ads on websites instead of random ones, so the ads are a little more useful to the reader and generate more money for the author. He gives the example of people generally thinking Google is smart when it isn't really; it's just programmed to give automatic answers based on the content of a page. He also describes how algorithms were giving people offensive responses that seemed like they were written by real people, when it was really an automatic response the algorithm generated from the content of the person's blog.
I thought it was very interesting and funny, and a good look at the inner workings of the Google company.
Tuesday, October 6, 2009
Assignment #3
Link to my CiteULike Library:
http://www.citeulike.org/user/asea85
My topics were: Elizabethan Theater, Homer's Odyssey, and the Russian Revolution.
Articles from Google Scholar/Zotero are tagged "googlescholar" and "zotero", and articles from CiteULike are tagged "fromciteulike".
Week 5 comments on others' blogs
http://mdelielis2600response.blogspot.com/2009/10/week-5-rfid-technology-in-libraries.html
Saturday, October 3, 2009
Week 6 reading notes
Local Area Network
(http://en.wikipedia.org/wiki/Local_Area_Network)
“A local area network (LAN) is a computer network covering a small physical area, like a home, office, or small group of buildings, such as a school, or an airport. The defining characteristics of LANs, in contrast to wide-area networks (WANs), include their usually higher data-transfer rates, smaller geographic place, and lack of a need for leased telecommunication lines.”
“Ethernet was developed at Xerox PARC in 1973–1975, and filed as U.S. Patent 4,063,220. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published their seminal paper, "Ethernet: Distributed Packet-Switching For Local Computer Networks."
ARCNET was developed by Datapoint Corporation in 1976 and announced in 1977. It had the first commercial installation in December 1977 at Chase Manhattan Bank in New York.”
“The development and proliferation of CP/M-based personal computers from the late 1970s and then DOS-based personal computers from 1981 meant that a single site began to have dozens or even hundreds of computers. The initial attraction of networking these was generally to share disk space and laser printers, which were both very expensive at the time.”
- Novell NetWare – 1983 to mid-1990s
- Windows NT and Windows for Workgroups – mid-1990s onward
This article had useful information about LANs and what they were used for, what their history was, and examples of types of LANs that are used now and were used in the past.
Computer Network
(http://en.wikipedia.org/wiki/Computer_network)
“A computer network is a group of interconnected computers. Networks may be classified according to a wide variety of characteristics.”
“A computer network allows computers to communicate with each other and to share resources and information. The Advanced Research Projects Agency (ARPA) funded the design of the "Advanced Research Projects Agency Network" (ARPANET) for the United States Department of Defense. It was the first operational computer network in the world. Development of the network began in 1969, based on designs begun in the 1960s.”
“Computer networks can also be classified according to the hardware and software technology that is used to interconnect the individual devices in the network, such as Optical fiber, Ethernet, Wireless LAN, HomePNA, Power line communication or G.hn. Ethernet uses physical wiring to connect devices. Frequently deployed devices include hubs, switches, bridges and/or routers.”
“Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission medium.”
Wired Technologies
Twisted-Pair Wire
Coaxial Cable
Fiber Optics
Wireless Technologies
Terrestrial Microwave
Communications Satellites
Cellular and PCS Systems
Wireless LANs
Bluetooth
The Wireless Web
“Networks are often classified as Local Area Network (LAN), Wide Area Network (WAN), Metropolitan Area Network (MAN), Personal Area Network (PAN), Virtual Private Network (VPN), Campus Area Network (CAN), Storage Area Network (SAN), etc. depending on their scale, scope and purpose.”
“An Internetwork is the connection of two or more distinct computer networks or network segments via a common routing technology. The result is called an internetwork (often shortened to internet).”
“All networks are made up of basic hardware building blocks to interconnect network nodes, such as Network Interface Cards (NICs), Bridges, Hubs, Switches, and Routers.”
This article had more information on other types of networks besides LANs. It told a little bit about the history of computer networks (starting with ARPANET, the precursor to the Internet, in the 1960s). It also talked about the hardware and software different networks use and how these are used to classify different types of networks. It described the different wired and wireless technologies used in networks, all of the different types of networks there are, and how they are used. One thing I didn’t know that I thought was interesting was that the name “Internet” comes from the term “internetwork”, which is the connection of two or more computer networks through a common routing technology. I’ve used the term “Internet” for so long that I never stopped to think where the name might actually come from, and what it all meant in simple terms.
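To make “computers communicating with each other” concrete, here is a tiny Python sketch of my own (not from any of the readings): a toy server and client exchanging messages over a network connection, with both ends on the same machine.

import socket
import threading
import time

def server():
    # One node: listen on a port and answer whoever connects.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 5000))   # any free port would do
    srv.listen(1)
    conn, _addr = srv.accept()
    print("server got:", conn.recv(1024).decode())
    conn.sendall(b"hello from the server")
    conn.close()
    srv.close()

t = threading.Thread(target=server)
t.start()
time.sleep(0.2)  # give the server a moment to start listening

# The other node: connect, send a message, read the reply.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 5000))
cli.sendall(b"hello from the client")
print("client got:", cli.recv(1024).decode())
cli.close()
t.join()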
YouTube – common types of computer networks
(http://www.youtube.com/watch?v=1dpgqDdfUjQ)
This short video basically talked about different types of networks that were explained in the Wikipedia article above. He talked about which networks were the most common and how they are used differently. He explained that the most common type of network is the Personal Area Network, which is basically things connected to a single computer, like a printer, copier, scanner, etc. I found PANs to be the most interesting, as I had never really thought about devices connected to a computer being considered a network themselves before. I guess when I think of the term “network” I think more of computers being connected to each other rather than devices being connected to a single computer.
Management of RFID in Libraries
Karen Coyle
(https://sremote.pitt.edu/,DanaInfo=ejournals.ebsco.com+Direct.asp?AccessToken=9I55X5D8X9MZKZRZDPXM5X9U4ZD98I4X5&Show=Object&msid=931959202)
“Briefly, the RF in RFID stands for “radio frequency”; the “ID” means “identifier.” The tag itself consists of a computer chip and an antenna, often printed on paper or some other flexible medium. The shortest metaphor is that RFID is like a barcode but is read with an electromagnetic field rather than by a laser beam.”
“In considering the introduction of any technology into the library we need to ask ourselves “why?” What is the motivation for libraries to embrace new technologies? The answer to this question may be fairly simple: libraries use new technologies because the conditions in the general environment that led to the development of the technology are also the conditions in which the library operates.”
“There is, however, a key difference to the library's inventory as compared to that of a warehouse or retail outlet. In the warehouse and retail supply chain, goods come in, and then they leave. Only occasionally do they return. The retail sector is looking at RFID as a “throw-away” technology that gets an item to a customer and then is discarded. Yet the per item cost of including an RFID tag is much more than the cost of printing a barcode on a package. In libraries, items are taken out and returned many times. This makes the library function an even better use of RFID than in retail because the same RFID tag is re-used many times.”
“Second only to circulation, libraries look to RFID as a security mechanism…. Although RFID can be used in library anti-theft systems, this does not mean that it is a highly secure technology…. The reason to use RFID for security is not because it is especially good for it, but because it is no worse than other security technologies.”
“This is an area where RFID can provide great advantages because the tags can be read while the books sit on the shelf. Not only does the cost of doing an inventory of the library go down, the odds of actually completing regular inventories goes up. This is one of those areas where a new technology will allow the library to do more rather than just doing the same functions with greater efficiency.”
This article had lots of information on RFID, how it is already used in other settings, and how it can be used in libraries. She gives a good argument for why RFID can be very useful to libraries: it is more practical in a library than in a warehouse or retail setting, since a library’s resources are checked out and returned many times, while a retail item is generally bought once and doesn’t come back, so the same tag gets reused. It also works as a security mechanism, alerting the library when an item isn’t where it’s supposed to be. She notes that it might not be highly secure, since a tag can be removed from a book or its signal blocked using mylar or aluminum, but it isn’t worse than any other type of security measure. It is also extremely useful for inventory: with barcodes the books need to be opened and scanned, while with RFID they can stay on the shelf and be read without being moved.
Muddiest Point for Week 5
We learned that raster images are bitmaps, stored in compressed formats like GIF, JPEG, TIFF, PNG, etc. But what are vector images called, and why aren't vector images used more often if their resolution holds up so much better when the image is enlarged?
Friday, September 25, 2009
Muddiest Point for Week 4
Just realized I didn't post last week saying I didn't have a muddiest point. Oops... Well, I don't have one this week either. :)
Wednesday, September 23, 2009
Week 5 reading notes
Data compression
(http://en.wikipedia.org/wiki/Data_compression)
“Data compression or source coding is the process of encoding information using fewer bits (or other information-bearing units) than an unencoded representation would use, through use of specific encoding schemes.”
“Compression is useful because it helps reduce the consumption of expensive resources, such as hard disk space or transmission bandwidth. On the downside, compressed data must be decompressed to be used, and this extra processing may be detrimental to some applications.”
I found this article fairly interesting. I knew the basic premise of data compression but didn’t know how often this process was used; it was more than I thought. It also explained the difference between different types of compression:
“Lossless compression algorithms usually exploit statistical redundancy in such a way as to represent the sender's data more concisely without error. Lossless compression is possible because most real-world data has statistical redundancy.”
“Another kind of compression, called lossy data compression or perceptual coding, is possible if some loss of fidelity is acceptable…. Lossy data compression provides a way to obtain the best fidelity for a given amount of compression. In some cases, transparent (unnoticeable) compression is desired; in other cases, fidelity is sacrificed to reduce the amount of data as much as possible.”
Overall, data compression sounds like a very useful way to save storage space or bandwidth, but care must be taken to match the right process to each type of data so that the least amount of fidelity is lost in the process.
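To make the lossless case concrete, here is a quick round-trip of my own using Python's built-in zlib module; the redundancy in the input is exactly what the algorithm exploits, and the decompressed output is bit-for-bit identical.

import zlib

original = b"AAAA BBBB AAAA BBBB " * 200      # highly redundant data
compressed = zlib.compress(original)
print(len(original), "->", len(compressed), "bytes")

# Lossless means exact recovery, not just "close enough":
assert zlib.decompress(compressed) == original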
Data compression basics
(http://dvd-hq.info/data_compression_1.php)
I liked how at the beginning of the article they clarified that it was meant for an audience of all backgrounds, not just readers with a grounding in information theory or programming, and also how they separated the more complex (or less relevant, as they called it) points from the main body of the article.
“The fundamental idea behind digital data compression is to take a given representation of information (a chunk of binary data) and replace it with a different representation (another chunk of binary data) that takes up less space (space here being measured in binary digits, better known as bits), and from which the original information can later be recovered. If the recovered information is guaranteed to be exactly identical to the original, the compression method is described as "lossless". If the recovered information is not guaranteed to be exactly identical, the compression method is described as "lossy".”
The articles were a long read with a lot of specific details, but I thought it was all well-organized and would be a great resource to go back to if we ever needed it.
Imaging Pittsburgh: Creating a shared gateway to digital image collections of the Pittsburgh region
by Edward A. Galloway
(http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1141/1061)
“The main focus of our project is to create a single Web gateway for the public to access thousands of visual images from photographic collections held by the Archives Service Center of the University of Pittsburgh, Carnegie Museum of Art, and the Historical Society of Western Pennsylvania.”
“An obvious benefit for users working with the collections as a group is the ability to obtain a wider picture of events and people, not to mention changes to localities, infrastructure, and land use. This is an important facet to mention since the collections document many different perspectives of the city throughout time.”
I particularly enjoyed reading this article – not only because it deals with digitizing and making available large numbers of images of the history of Pittsburgh, but that it’s a type of project I feel that I’d love to work on someday. I’m fascinated with the history of Pittsburgh to begin with, and I’d love to look through their online collection in my free time to explore more of the history of the city.
YouTube and libraries: It could be a beautiful relationship
by Paula L. Webb
(http://www.lita.org/ala/mgrps/divs/acrl/publications/crlnews/2007/jun/youtube.cfm)
The link in the syllabus to the article didn’t work, so I had to do a bit of searching to find it – I got it eventually, though!
This article is about libraries using YouTube to reach out to people over the internet. It explains how beneficial it is for libraries to put out videos showing how to use their services, along with any other information new users might find useful before visiting the library in person.
Most of the article explains features of YouTube that I already know and have used. It’s a fairly broad suggestion, since any organization out there can use the same idea to its advantage, but I still think it would be useful. It would be extremely easy for users to view tutorials and instructional videos about a library on YouTube, and it might save a lot of time compared with going in and asking in person first.
Monday, September 21, 2009
Thursday, September 17, 2009
Week 4 reading notes
Database:
(http://en.wikipedia.org/wiki/Database)
Most of these things I didn’t know previously, so this will be mostly notes with a couple thoughts here and there. Notes in quotations are taken from the Wikipedia article above.
A database is “an integrated collection of logically related records or files consolidated into a common pool that provides data for many applications. In one view, databases can be classified according to types of content: bibliographic, full-text, numeric, and images.”
The data in a database is organized according to a database model, the most common one being the relational model.
Architecture:
On-line Transaction Processing systems (OLTP) use “row oriented” datastore architecture, while data-warehouse and other retrieval-focused applications or bibliographic database (library catalogue) systems may use a column-oriented DBMS (database management system) architecture.
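To picture the difference, here is a tiny illustration of my own in plain Python (a real DBMS lays this out on disk, not in lists): a row store keeps each record together, while a column store keeps each field together.

rows = [  # row-oriented: good for transaction-style "fetch one record" work
    {"id": 1, "title": "Moby-Dick", "year": 1851},
    {"id": 2, "title": "Ulysses",   "year": 1922},
]
columns = {  # column-oriented: good for scanning one field across all rows
    "id":    [1, 2],
    "title": ["Moby-Dick", "Ulysses"],
    "year":  [1851, 1922],
}

print(rows[1])                    # OLTP-style: one whole record at once
print(sum(columns["year"]) / 2)   # retrieval-style: one column scanned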
Database management systems:
A DBMS is software that organizes storage of data, controlling “the creation, maintenance, and use of the database storage structures of an organization and its end users.”
DBMS has five main components:
- Interface drivers: provide methods to prepare and execute statements, get results, etc.
- SQL engine (comprises the three major components below)
- Transaction engine
- Relational engine
- Storage engine
ODBMS has four main components:
(The article doesn’t say what the O stands for; as far as I can tell it’s Object, as in an object-oriented DBMS.)
-Language drivers
-Query engine
-Transaction engine
-Storage engine
Primary tasks of DBMS packages include:
-Database Development: defines and organizes the content, relationships, and structure of the data needed to build a database.
-Database Interrogation: accesses the data in a database for information retrieval. Users can selectively retrieve and display information and produce printed documents.
-Database Maintenance: used to “add, delete, update, correct, and protect the data in a database.”
-Application Development: used to “develop prototypes of data entry screens, queries, forms, reports, tables, and labels for a prototyped application.”
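A few of these tasks can be tried out with Python's built-in sqlite3 module; here is a minimal sketch of my own (the table and data are made up) covering development, interrogation, and maintenance:

import sqlite3

db = sqlite3.connect(":memory:")

# Database Development: define the structure of the data.
db.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT)")
db.execute("INSERT INTO books (title) VALUES (?)", ("Moby-Dick",))

# Database Interrogation: selectively retrieve and display information.
for row in db.execute("SELECT id, title FROM books"):
    print(row)

# Database Maintenance: update (or correct) the data.
db.execute("UPDATE books SET title = ? WHERE id = ?",
           ("Moby-Dick; or, The Whale", 1))
db.commit()
db.close()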
Types of databases:
-Operational
-Analytical
-Data
-Distributed
-End-user
-External
-Hypermedia
-Navigational
-In-memory
-Document-oriented
-Real-time
All databases take advantage of indexing to increase speed. “The most common kind of index is a sorted list of the contents of some particular table column, with pointers to the row associated with the value.”
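That "sorted list with pointers" idea fits in a few lines of plain Python (my own illustration, not from the article): the index holds (value, row number) pairs in sorted order, so a lookup can binary-search instead of scanning the whole table.

from bisect import bisect_left

table = [("Moby-Dick", 1851), ("Ulysses", 1922), ("Dracula", 1897)]

# Index on the "year" column: sorted values, each with a row pointer.
index = sorted((year, row_id) for row_id, (_title, year) in enumerate(table))

def lookup(year):
    i = bisect_left(index, (year,))       # binary search, not a full scan
    if i < len(index) and index[i][0] == year:
        return table[index[i][1]]         # follow the pointer to the row

print(lookup(1897))   # ('Dracula', 1897)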
Database software should enforce the ACID rules:
-Atomicity
-Consistency
-Isolation
-Durability
Many DBMSs relax a lot of these rules for better performance.
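As a quick illustration of atomicity (the A in ACID), here is a sketch of my own using sqlite3 again: a transaction that fails partway through is rolled back, leaving the data exactly as it was.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [("alice", 100), ("bob", 0)])
db.commit()

try:
    with db:  # one transaction: everything commits, or nothing does
        db.execute("UPDATE accounts SET balance = balance - 50 "
                   "WHERE name = 'alice'")
        raise RuntimeError("simulated crash before bob is credited")
except RuntimeError:
    pass

# The debit was rolled back: alice still has 100 and bob still has 0.
print(list(db.execute("SELECT * FROM accounts")))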
Security is enforced through access control, auditing, and encryption.
“Databases are used in many applications, spanning virtually the entire range of computer software. Databases are the preferred method of storage for large multiuser applications, where coordination between many users is needed.”
My notes: There were a few terms mentioned in the article that were never explained or linked to other articles: for example, SQL, ODBMS, and RDBMS (for the record, SQL is Structured Query Language, and the O and R stand for Object and Relational). Other than that it was a decent introduction to the concept of DBMSs and how they work.
~&~
Anne J. Gilliland. Introduction to Metadata: Pathways to Digital Information: 1: Setting the Stage
(http://www.getty.edu/research/conducting_research/standards/intrometadata/setting.html)
Again, all quotes are directly from the article:
Metadata means “data about data”.
“Until the mid-1990s…. metadata referred to a suite of industry or disciplinary standards as well as additional internal and external documentation and other data necessary for the identification, representation, interoperability, technical management, performance, and use of data contained in an information system.”
“In general, all information objects, regardless of the physical or intellectual form they take, have three features…. all of which can and should be reflected through metadata:
-Content relates to what the object contains or is about and is intrinsic to an information object.
-Context indicates the who, what, why, where, and how aspects associated with the object's creation and is extrinsic to an information object.
-Structure relates to the formal set of associations within or among individual information objects and can be intrinsic or extrinsic or both.”
“Library metadata development has been first and foremost about providing intellectual and physical access to collection materials. Library metadata includes indexes, abstracts, and bibliographic records created according to cataloging rules (data content standards).”
“In an environment where a user can gain unmediated access to information objects over a network, metadata
-certifies the authenticity and degree of completeness of the content;
-establishes and documents the context of the content;
-identifies and exploits the structural relationships that exist within and between information objects;
-provides a range of intellectual access points for an increasingly diverse range of users; and
-provides some of the information that an information professional might have provided in a traditional, in-person reference or research setting.”
“Repositories also create metadata relating to the administration, accessioning, preservation, and use of collections…. Integrated information resources such as virtual museums, digital libraries, and archival information systems include digital versions of actual collection content (sometimes referred to as digital surrogates), as well as descriptions of that content (i.e., descriptive metadata, in a variety of formats).”
“Metadata not only identifies and describes an information object; it also documents how that object behaves, its function and use, its relationship to other information objects, and how it should be and has been managed over time.”
Different Types of Metadata…
-Administrative
-Descriptive
-Preservation
-Technical
-Use
Primary Functions of Metadata…
-Creation, multiversioning, reuse, and recontextualization of information objects
-Organization and description
-Validation
-Utilization and preservation
-Disposition
Some Little-Known Facts about Metadata…
-Doesn’t have to be digital
-Is more than the description of an object
-Comes from a variety of sources
-Accumulates during the life of an information object or system
-One information object's metadata can simultaneously be another’s data, depending on aggregations of and dependencies between information objects and systems
Why Is Metadata Important?
-Increased accessibility
-Retention of context
-Expanding use
-Learning metadata
-System development and enhancement
-Multiversioning
-Legal issues
-Preservation and persistence
“Metadata provides us with the Rosetta stone that will make it possible to decode information objects and their transformation into knowledge in the cultural heritage information systems of the future.”
My notes: It took me a while to get through this article. The language was relatively easy to understand, but there was a lot of fact-stating and not a lot of examples, which generally help me in understanding a subject. I did like how she organized a lot of the facts about metadata into tables, which I’ve condensed into short lists here. Presenting the information that way was an effective way to get a lot of information across without seeming bogged down.
~&~
Eric J. Miller. An Overview of the Dublin Core Data Model
(http://dublincore.org/1999/06/06-overview/)
“The Dublin Core Metadata Initiative (DCMI) is an international effort designed to foster consensus across disciplines for the discovery-oriented description of diverse resources in an electronic environment…. The requirement of providing the means for a modular, extensible, metadata architecture to address local or discipline-specific descriptive needs has been identified since the very beginning of the DCMI work [WF]. The formalized representation of this requirement has been the basis for the Dublin Core Data Model activity.”
DCMI Requirements…
-Internationalization
-Modularization/Extensibility
-Element Identity
-Semantic Refinement
-Identification of encoding schemes
-Specification of controlled vocabularies
-Identification of structured compound values
The Basic Dublin Core Data Model…
-There are resources in the world that we would like to describe. These resources have properties associated with them. The values of these properties can be literals (e.g. string-values) or other resources.
-A resource can be anything that can be uniquely identified.
-Properties are specific types of resources.
-Classes of objects are specific types of resources.
-Literals are terminal resources. (Literals are simple text strings).
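To see the model in action, here is a made-up description of my own (the URIs are hypothetical; the dc: element names are real Dublin Core ones): one resource, a handful of properties, and values that are either literals or other resources.

record = {
    "resource":    "http://example.org/books/moby-dick",  # anything uniquely identifiable
    "dc:title":    "Moby-Dick; or, The Whale",            # literal (a simple string)
    "dc:creator":  "Herman Melville",                     # literal
    "dc:date":     "1851",                                # literal
    "dc:relation": "http://example.org/books/typee",      # value is another resource
}

for prop, value in record.items():
    print(prop, "=", value)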
My notes: I’m not really sure what to say about this article. It states that it’s an overview and a work in progress, but it’s dated 1999, so I’m kind of curious to see what their status is now. With all the advancements in technology over the past ten years, I wonder if their model or any of their requirements have changed since then.
Monday, September 14, 2009
Friday, September 11, 2009
Week 3 reading notes
Machtelt Garrels. “Introduction to Linux: A Hands on Guide”
(http://tldp.org/LDP/intro-linux/html/chap_01.html)
In the old days, every computer had a different operating system. Software for one didn’t work on another. Garrels writes, “In 1969, a team of developers in the Bell Labs laboratories started working on a solution for the software problem, to address these compatibility issues. They developed a new operating system, which was simple and elegant, written in the C programming language instead of in assembly code, [and] able to recycle code. The Bell Labs developers named their project "UNIX.””
Linux later became an implementation of UNIX, as Garrels writes, “Linux is a full UNIX clone, fit for use on workstations as well as on middle-range and high-end servers.”
There were a few things I found confusing. At times the writer mentioned terms that were never explained, as if he expected us to know them already. For example, what is comp.os.minix? And what exactly is POSIX? The article says it’s a standard for UNIX but nothing more. It also said UNIX was gradually being called Linux, but why, exactly, if it’s essentially the same thing?
I know next to nothing about Linux, but from what I learned in this reading, it sounds like a more effective and usable operating system than Windows or Mac, as long as you understand how to use it and how it works. I would be hesitant about trying it myself, though, since it said that though progress is being made, it is not very user-friendly for beginners.
“What is Mac OS X?” By Amit Singh
(http://osxbook.com/book/bonus/ancient/whatismacosx//)
Having never owned a Mac, I found this article to be even more confusing than the Linux one. Since I’ve seldom used Macs and don’t really know anything about them, the names of all the programs are just names to me and don’t mean anything else. Maybe it’s because I’m used to Windows, but the whole Mac operating system seems twice as complicated for me to understand. Open Firmware and the bootloader especially seemed tremendously complicated. I understand that they can be powerful tools, but I think you would need to be an expert to use them effectively.
However, I was pleasantly surprised to learn that Mach, which XNU was based on, originated as a research project at Carnegie Mellon University in the mid-80s.
Paul Thurrott, “An Update on the Windows Roadmap”
(http://community.winsupersite.com/blogs/paul/archive/2008/06/27/an-update-on-the-windows-roadmap.aspx)
I honestly don’t understand all the backlash Windows Vista has received lately. I recently got a Dell laptop with Vista, and so far it has given me no problems. Maybe it’s because I’m not a techie and don’t get exactly how differently Windows systems work from each other, but I’ve never found Vista to be particularly hard to use.
To touch on the Windows vs. Mac debate: since our family got our first computer in 2001 (yes, we were latecomers!), which had Windows ME, none of us have ever had problems with any of our computers that were Windows’ fault. There were a couple of crashes, but no unrecoverable data loss. I have several friends who have worked with all sorts of computers for years, and according to them, if you have a Windows machine that continually crashes, it’s something you are doing wrong, not the program’s fault. I believe that as long as you use it smartly (not shutting down random services without knowing their function, running anti-virus software configured to work best for each particular machine, and configuring firewalls to match), a Windows PC will run reliably for years. That’s not to say I don’t like Macs or think they are unreliable, but I think it’s a complete myth to say that they never screw up or crash. From what I understand, they can crash just as often as PCs and are beyond annoying to deal with when something goes wrong. And when something goes wrong, it’s bad.
Week 2 reading notes
I have to apologize for the extreme lateness in posting these - I got very confused about when each week's readings were due and only recently got it straightened out. Week 3's reading notes will be up later today.
Notes on personal computer hardware:
(http://en.wikipedia.org/wiki/Computer_hardware)
Typical PC hardware includes:
Motherboard
-- Central Processing Unit (CPU)
-- Chipset
-- RAM
-- Basic Input Output System (BIOS)
-- Internal buses
-- External bus controllers
Power supply
-- Power cords, switch, cooling fan
Video display controller (graphics card)
Removable media devices (storage)
-- CD
-- DVD
-- Blu-Ray
-- USB flash drive
-- Tape drive
Internal storage
-- Hard disc
-- Solid-state drive
-- RAID array controller
Sound card
Input:
Text input devices
-- Keyboard
Pointing devices
-- Mouse
-- Optical Mouse
-- Trackball
Gaming devices
-- Joystick
-- Gamepad
-- Game controller
Image, Video input devices
-- Image scanner
-- Webcam
Audio input devices
-- Microphone
Though this entry did not go into great detail describing all of this hardware, it provided links to other Wikipedia entries that talk about each piece in more depth. For someone like me who does not know a lot about how the technical side of computers works, it was effective in helping me understand a bit more about it. A lot of it I had always sort of overlooked as common knowledge, but it was nice to see everything categorized and listed together, along with links to more in-depth descriptions.
Moore's law
(http://en.wikipedia.org/wiki/Moore%27s_law)
Moore’s Law states that the number of transistors that can be placed on an integrated circuit doubles about every two years, a trend that has held since the integrated circuit’s invention in 1958. Gordon Moore made this observation in 1965, and the trend has continued to this day; though he doesn’t expect it to last forever, it is not expected to stop for at least another five years.
When you’re looking ahead to the future, it gets to the point where you wonder how much more things like this can improve - like for example, are we going to get to the point where we can store terabytes or more of information on a computer the size of an iPod? I have a hard time imagining what more can be done, but this trend has continued for so long and with such consistency that I also can’t really imagine it tapering off anytime soon. Progress is made in technology so continuously that we are always being surprised and impressed with its improvements.
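Out of curiosity, the doubling math is easy to check in Python (my own back-of-the-envelope figuring; the starting count of 64 is just a rough 1965-era figure):

count = 64                          # rough transistor count circa 1965
for year in range(1967, 2011, 2):   # one doubling every two years, through 2009
    count *= 2

# 22 doublings: 64 * 2**22 = 268,435,456 -- hundreds of millions of
# transistors, which is roughly the scale of a 2009-era CPU.
print(count)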
The Computer History Museum
(http://www.computerhistory.org/)
I've always been fascinated with comparing modern things to items of the past, discovering their origins and seeing how much they've changed, so I found this website to be extremely interesting. A lot of the technical electrical jargon went over my head, but it was still interesting to see how everything progressed over the course of the years. I particularly enjoyed the timeline of computer storage history, and looking over all of the old models of computers from the 30s to the early 90s.
Thursday, September 10, 2009
Link to my Flickr photostream
Here is the link to my photostream on Flickr:
http://www.flickr.com/photos/42457576@N08/
I'll be updating it with my pictures for Assignment #2 in due time!
Muddiest Point for Week 2
I was wondering exactly how RAM works - what kind of data does it store that is lost when the computer is turned off? Is it like Internet cookies, or the contents of the Temporary Internet Files folder on a PC? Or does it have more to do with actions you perform offline?
Tuesday, September 8, 2009
Week 1, Assignment 1
Notes gathered from “2004 Information Format Trends: Content, Not Containers”
(OCLC report: Information Format Trends: Content, Not Containers (2004). http://www.oclc.org/reports/2004format.htm)
This paper claims that content consumers generally don’t care what form information comes in, whether books, journals, or Web pages.
- According to Mark Federman of the McLuhan Program in Culture and Technology at the University of Toronto, the “message” of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs.2
- A recent study shows that almost 41 percent of the academic libraries sampled plan to “aggressively” reduce spending for print and increase expenditures for electronic resources.36
- What seems clear is that libraries should move beyond the role of collector and organizer of content, print and digital, to one that establishes the authenticity and provenance of content and provides the imprimatur of quality in an information rich but context-poor world. The challenge is how to do this. The best way to adapt is to understand what’s forcing the change.
- This new world is abundant and unstructured, but contextual mechanisms for navigating and synthesizing the information commons are scarce, even in—perhaps especially in—libraries. “We are drowning in information but are starving for knowledge. Information is only useful when it can be located and synthesized into knowledge.”53
2. Mark Federman, “What is the Meaning of the Medium is the Message?” n.d., http://www.mcluhan.utoronto.ca/article_mediumisthemessage.htm (viewed July 18, 2004).
36. Primary Research Group, The Survey of Academic Libraries, 2004, Press release, PRWeb, March 2004, http://www.prweb.com/releases/2004/3/prweb112699.htm (viewed July 19, 2004).
53. Mani Shabrang, Dow Chemical Business Intelligence Center, as quoted in Drew Robb, “Text Mining Tools Take On Unstructured Data,” Computerworld, June 21, 2004, n.p., http://www.computerworld.com/databasetopics/businessintelligence/story/0,10801,93968,00.html (viewed July 18, 2004).
~&~
Notes from “Information Literacy and Information Technology Literacy: New Components in the Curriculum for a Digital Culture”
(Clifford Lynch, “Information Literacy and Information Technology Literacy: New Components in the Curriculum for a Digital Culture” http://www.cni.org/staff/cliffpubs/info_and_IT_literacy.pdf)
In this paper, Lynch examines the differences between information technology literacy and information literacy, and emphasizes the need for people from all walks of life to stay up to date in the skills needed to operate and understand information technology, since those tools become obsolete so quickly. In his view, information technology literacy deals with understanding the technology tools that support everyday life, while information literacy deals with the content itself and with communication. He also outlines what he sees as the two general perspectives on information technology literacy: the first emphasizes skills in using information technology tools, while the second focuses on understanding how the technologies and systems work.
Wednesday, September 2, 2009
Hello!
This will be my blog for the Fall 2009 Introduction to Information Technology course (LIS 2600) at Pitt.
:)