Friday, September 25, 2009
Muddiest Point for Week 4
Just realized I didn't post last week saying I didn't have a muddiest point. Oops... Well, I don't have one this week either. :)
Wednesday, September 23, 2009
Week 5 reading notes
Data compression
(http://en.wikipedia.org/wiki/Data_compression)
“Data compression or source coding is the process of encoding information using fewer bits (or other information-bearing units) than an unencoded representation would use, through use of specific encoding schemes.”
“Compression is useful because it helps reduce the consumption of expensive resources, such as hard disk space or transmission bandwidth. On the downside, compressed data must be decompressed to be used, and this extra processing may be detrimental to some applications.”
I found this article fairly interesting. I knew the basic premise of data compression but didn’t know how often this process was used; it was more than I thought. It also explained the difference between different types of compression:
“Lossless compression algorithms usually exploit statistical redundancy in such a way as to represent the sender's data more concisely without error. Lossless compression is possible because most real-world data has statistical redundancy.”
“Another kind of compression, called lossy data compression or perceptual coding, is possible if some loss of fidelity is acceptable…. Lossy data compression provides a way to obtain the best fidelity for a given amount of compression. In some cases, transparent (unnoticeable) compression is desired; in other cases, fidelity is sacrificed to reduce the amount of data as much as possible.”
Overall, data compression sounds like a very useful way to save storage space or bandwidth, but care must be taken to choose the right process for each type of data so that the least amount of fidelity is lost in the process.
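To make the lossless idea concrete, here's a minimal sketch using Python's standard-library zlib module (the repeated text is just a made-up example): redundant data shrinks a lot, and decompression recovers the original exactly.

```python
import zlib

# Highly redundant data compresses well losslessly.
original = b"Pittsburgh " * 100
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original  # lossless: the exact original comes back
print(len(original), "bytes before,", len(compressed), "bytes after")
```

A lossy codec (like JPEG for images) would instead discard detail it judges imperceptible, so the restored data would only approximate the original.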
Data compression basics
(http://dvd-hq.info/data_compression_1.php)
I liked how at the beginning of the article they clarified that the information in it was meant for an audience of all backgrounds, not just information theory or programming, and also how they separated the more complex (or less relevant, as they called it) points from the main body of the article.
“The fundamental idea behind digital data compression is to take a given representation of information (a chunk of binary data) and replace it with a different representation (another chunk of binary data) that takes up less space (space here being measured in binary digits, better known as bits), and from which the original information can later be recovered. If the recovered information is guaranteed to be exactly identical to the original, the compression method is described as "lossless". If the recovered information is not guaranteed to be exactly identical, the compression method is described as "lossy".”
The articles were a long read with a lot of specific details, but I thought it was all well-organized and would be a great resource to go back to if we ever needed it.
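One of the simplest illustrations of the "replace a representation with a smaller one" idea quoted above is run-length encoding, which the article uses as a starting example. A toy sketch:

```python
def rle_encode(data: str) -> list:
    """Run-length encoding: replace each run of a repeated character
    with a (character, count) pair -- a smaller representation
    whenever the input contains long runs."""
    runs = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs: list) -> str:
    """Expand the pairs back out -- lossless, so this is exact."""
    return "".join(ch * count for ch, count in runs)

encoded = rle_encode("aaaabbbcc")
print(encoded)                               # [('a', 4), ('b', 3), ('c', 2)]
assert rle_decode(encoded) == "aaaabbbcc"    # original fully recovered
```

On data without runs, the encoded form is actually larger than the input, which matches the articles' point that no compression scheme can shrink every possible input.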
Imaging Pittsburgh: Creating a shared gateway to digital image collections of the Pittsburgh region
by Edward A. Galloway
(http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1141/1061)
“The main focus of our project is to create a single Web gateway for the public to access thousands of visual images from photographic collections held by the Archives Service Center of the University of Pittsburgh, Carnegie Museum of Art, and the Historical Society of Western Pennsylvania.”
“An obvious benefit for users working with the collections as a group is the ability to obtain a wider picture of events and people, not to mention changes to localities, infrastructure, and land use. This is an important facet to mention since the collections document many different perspectives of the city throughout time.”
I particularly enjoyed reading this article – not only because it deals with digitizing and making available large numbers of images of the history of Pittsburgh, but that it’s a type of project I feel that I’d love to work on someday. I’m fascinated with the history of Pittsburgh to begin with, and I’d love to look through their online collection in my free time to explore more of the history of the city.
YouTube and libraries: It could be a beautiful relationship
by Paula L. Webb
(http://www.lita.org/ala/mgrps/divs/acrl/publications/crlnews/2007/jun/youtube.cfm)
The link in the syllabus to the article didn’t work, so I had to do a bit of searching to find it – I got it eventually, though!
This article is about libraries using YouTube to reach out to people over the internet. It explains how beneficial it is for libraries to put out videos showing how to use their services, along with any other information new users might find useful before visiting the library in person.
Most of this article explains features of YouTube that I already know and have used. It's a fairly broad suggestion, since any organization could use the same idea to its advantage, but I still think it would be useful. It would be extremely easy for users to view tutorials and instructional videos about a library on YouTube, which might save them the time of going in and asking in person first.
Thursday, September 17, 2009
Week 4 reading notes
Database:
(http://en.wikipedia.org/wiki/Database)
Most of these things I didn’t know previously, so this will be mostly notes with a couple thoughts here and there. Notes in quotations are taken from the Wikipedia article above.
A database is “an integrated collection of logically related records or files consolidated into a common pool that provides data for many applications. In one view, databases can be classified according to types of content: bibliographic, full-text, numeric, and images.”
The data in a database is organized according to a database model, the most common one being the relational model.
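As a tiny illustration of the relational model, here's a sketch using Python's built-in sqlite3 module (the table names and data are made up): data lives in tables of rows, and relationships between tables are expressed through shared key columns.

```python
import sqlite3

# Two related tables: books point at authors via a key column.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE books (title TEXT, author_id INTEGER)")
con.execute("INSERT INTO authors VALUES (1, 'Anne J. Gilliland')")
con.execute("INSERT INTO books VALUES ('Introduction to Metadata', 1)")

# A join reassembles the related rows from both tables.
row = con.execute(
    "SELECT b.title, a.name FROM books b JOIN authors a ON b.author_id = a.id"
).fetchone()
print(row)  # ('Introduction to Metadata', 'Anne J. Gilliland')
```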
Architecture:
On-line Transaction Processing systems (OLTP) use “row oriented” datastore architecture, while data-warehouse and other retrieval-focused applications or bibliographic database (library catalogue) systems may use a column-oriented DBMS (database management system) architecture.
Database management systems:
A DBMS is software that organizes storage of data, controlling “the creation, maintenance, and use of the database storage structures of an organization and its end users.”
DBMS has five main components:
- Interface drivers: provide methods to prepare and execute statements, get results, etc.
- SQL engine (comprises the three major components below)
- Transaction engine
- Relational engine
- Storage engine
ODBMS has four main components:
(The article doesn’t say, but the O apparently stands for Object, as in object database management system.)
-Language drivers
-Query engine
-Transaction engine
-Storage engine
Primary tasks of DBMS packages include:
-Database Development: defines and organizes the content, relationships, and structure of the data needed to build a database.
-Database Interrogation: accesses the data in a database for information retrieval. Users can selectively retrieve and display information and produce printed documents.
-Database Maintenance: used to “add, delete, update, correct, and protect the data in a database.”
-Application Development: used to “develop prototypes of data entry screens, queries, forms, reports, tables, and labels for a prototyped application.”
Types of databases:
-Operational
-Analytical
-Data
-Distributed
-End-user
-External
-Hypermedia
-Navigational
-In-memory
-Document-oriented
-Real-time
All databases take advantage of indexing to increase speed. “The most common kind of index is a sorted list of the contents of some particular table column, with pointers to the row associated with the value.”
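That quoted description of an index can be sketched in a few lines of Python (the table and column are invented for illustration): keep a sorted list of (value, row position) pairs, then binary-search it with the standard-library bisect module instead of scanning every row.

```python
import bisect

# A toy "table" of rows.
rows = [
    {"id": 10, "city": "Pittsburgh"},
    {"id": 4,  "city": "Erie"},
    {"id": 7,  "city": "Altoona"},
]

# The index: sorted (value, row_position) pairs on the "city" column.
index = sorted((row["city"], pos) for pos, row in enumerate(rows))

def lookup(city):
    """Binary-search the index, then follow the pointer to the row."""
    i = bisect.bisect_left(index, (city, -1))
    if i < len(index) and index[i][0] == city:
        return rows[index[i][1]]
    return None

print(lookup("Erie"))  # {'id': 4, 'city': 'Erie'}
```

On a table of n rows this takes about log2(n) comparisons instead of n, which is why indexed lookups stay fast as tables grow.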
Database software should enforce the ACID rules:
-Atomicity
-Consistency
-Isolation
-Durability
Many DBMSs relax some of these rules for better performance.
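Atomicity, the A in ACID, is the easiest of the four to demonstrate. In this sketch with Python's built-in sqlite3 module (the account data is made up), a simulated failure halfway through a transfer causes the whole transaction to roll back, so the half-finished update never becomes visible:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
con.commit()

# Atomicity: both updates succeed together, or neither takes effect.
try:
    with con:  # using the connection as a context manager: commit on
               # success, rollback on an exception
        con.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'a'")
        raise RuntimeError("simulated crash mid-transfer")
        con.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'b'")
except RuntimeError:
    pass

# The first update was rolled back along with the failed transaction.
print(con.execute("SELECT balance FROM accounts WHERE name = 'a'").fetchone())  # (100,)
```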
Security is enforced through access control, auditing, and encryption.
“Databases are used in many applications, spanning virtually the entire range of computer software. Databases are the preferred method of storage for large multiuser applications, where coordination between many users is needed.”
My notes: There were a few terms mentioned in the article that were never explained or linked to other articles: for example, SQL, ODBMS, and RDBMS (it turns out the O and R stand for Object and Relational, but the article never says so). Other than that it was a decent introduction to the concept of DBMSs and how they work.
~&~
Anne J. Gilliland. Introduction to Metadata: Pathways to Digital Information: 1: Setting the Stage
(http://www.getty.edu/research/conducting_research/standards/intrometadata/setting.html)
Again, all quotes are directly from the article:
Metadata means “data about data”.
“Until the mid-1990s…. metadata referred to a suite of industry or disciplinary standards as well as additional internal and external documentation and other data necessary for the identification, representation, interoperability, technical management, performance, and use of data contained in an information system.”
“In general, all information objects, regardless of the physical or intellectual form they take, have three features…. all of which can and should be reflected through metadata:
-Content relates to what the object contains or is about and is intrinsic to an information object.
-Context indicates the who, what, why, where, and how aspects associated with the object's creation and is extrinsic to an information object.
-Structure relates to the formal set of associations within or among individual information objects and can be intrinsic or extrinsic or both.”
“Library metadata development has been first and foremost about providing intellectual and physical access to collection materials. Library metadata includes indexes, abstracts, and bibliographic records created according to cataloging rules (data content standards).”
“In an environment where a user can gain unmediated access to information objects over a network, metadata
-certifies the authenticity and degree of completeness of the content;
-establishes and documents the context of the content;
-identifies and exploits the structural relationships that exist within and between information objects;
-provides a range of intellectual access points for an increasingly diverse range of users; and
-provides some of the information that an information professional might have provided in a traditional, in-person reference or research setting.”
“Repositories also create metadata relating to the administration, accessioning, preservation, and use of collections…. Integrated information resources such as virtual museums, digital libraries, and archival information systems include digital versions of actual collection content (sometimes referred to as digital surrogates), as well as descriptions of that content (i.e., descriptive metadata, in a variety of formats).”
“Metadata not only identifies and describes an information object; it also documents how that object behaves, its function and use, its relationship to other information objects, and how it should be and has been managed over time.”
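The categories the article describes can be pictured as one record attached to a digitized object. A hypothetical sketch (the field names and values are invented for illustration, not taken from any particular standard):

```python
# An illustrative metadata record for a digitized photograph.
record = {
    "descriptive": {            # what the object is and is about
        "title": "Smithfield Street Bridge",
        "subject": ["bridges", "Pittsburgh (Pa.)"],
    },
    "administrative": {         # managing the object over time
        "date_digitized": "2004-03-15",
        "rights": "Copyright status undetermined",
    },
    "technical": {              # how the digital file behaves
        "format": "image/tiff",
        "resolution_dpi": 600,
    },
}

# "Data about data" is itself data: the record can be searched
# like any other structured information.
assert "bridges" in record["descriptive"]["subject"]
```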
Different Types of Metadata…
-Administrative
-Descriptive
-Preservation
-Technical
-Use
Primary Functions of Metadata…
-Creation, multiversioning, reuse, and recontextualization of information objects
-Organization and description
-Validation
-Utilization and preservation
-Disposition
Some Little-Known Facts about Metadata…
-Doesn’t have to be digital
-Is more than the description of an object
-Comes from a variety of sources
-Accumulates during the life of an information object or system
-One information object's metadata can simultaneously be another’s data, depending on aggregations of and dependencies between information objects and systems
Why Is Metadata Important?
-Increased accessibility
-Retention of context
-Expanding use
-Learning metadata
-System development and enhancement
-Multiversioning
-Legal issues
-Preservation and persistence
“Metadata provides us with the Rosetta stone that will make it possible to decode information objects and their transformation into knowledge in the cultural heritage information systems of the future.”
My notes: It took me a while to get through this article. The language was relatively easy to understand, but there was a lot of fact-stating and not a lot of examples, which are generally helpful to me in understanding a subject. I did like how she organized a lot of the facts about metadata into tables, which I’ve organized into short lists here. Presenting the information that way was an effective way to get a lot of information across without seeming bogged-down.
~&~
Eric J. Miller. An Overview of the Dublin Core Data Model
(http://dublincore.org/1999/06/06-overview/)
“The Dublin Core Metadata Initiative (DCMI) is an international effort designed to foster consensus across disciplines for the discovery-oriented description of diverse resources in an electronic environment…. The requirement of providing the means for a modular, extensible, metadata architecture to address local or discipline-specific descriptive needs has been identified since the very beginning of the DCMI work [WF]. The formalized representation of this requirement has been the basis for the Dublin Core Data Model activity.”
DCMI Requirements…
-Internationalization
-Modularization/Extensibility
-Element Identity
-Semantic Refinement
-Identification of encoding schemes
-Specification of controlled vocabularies
-Identification of structured compound values
The Basic Dublin Core Data Model…
-There are resources in the world that we would like to describe. These resources have properties associated with them. The values of these properties can be literals (e.g. string-values) or other resources.
-A resource can be anything that can be uniquely identified.
-Properties are specific types of resources.
-Classes of objects are specific types of resources.
-Literals are terminal resources. (Literals are simple text strings).
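The model in the list above boils down to statements of the form (resource, property, value), where a value is either a literal string or a pointer to another resource. A toy sketch (the URIs are made up; dc:title and dc:creator are real Dublin Core element names):

```python
# (resource, property, value) statements. A value can be a literal
# string or the identifier of another resource.
triples = [
    ("http://example.org/photo42", "dc:title",   "View of the Point"),
    ("http://example.org/photo42", "dc:creator", "http://example.org/person7"),
    ("http://example.org/person7", "dc:title",   "Unknown photographer"),
]

def properties_of(resource):
    """All (property, value) pairs describing one resource."""
    return [(p, v) for (r, p, v) in triples if r == resource]

print(properties_of("http://example.org/photo42"))
```

Because a value can itself be a resource with its own properties, descriptions chain together, which is what makes the model modular and extensible.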
My notes: I’m not really sure what to say about this article. It states that it’s an overview and a work in progress, but it’s dated 1999, so I’m curious what their status is now. With all the advancements in technology over the past ten years, I wonder whether their model or any of their requirements have changed since then.
Friday, September 11, 2009
Week 3 reading notes
Machtelt Garrels. “Introduction to Linux: A Hands on Guide”
(http://tldp.org/LDP/intro-linux/html/chap_01.html)
In the old days, every computer had a different operating system. Software for one didn’t work on another. Garrels writes, “In 1969, a team of developers in the Bell Labs laboratories started working on a solution for the software problem, to address these compatibility issues. They developed a new operating system, which was simple and elegant, written in the C programming language instead of in assembly code, [and] able to recycle code. The Bell Labs developers named their project "UNIX.””
Linux later became an implementation of UNIX, as Garrels writes, “Linux is a full UNIX clone, fit for use on workstations as well as on middle-range and high-end servers.”
There were a few things I found confusing. At times the writer mentioned terms that were never explained, as if he expected us to know them already. For example, what is comp.os.minix? And what exactly is POSIX? The article says only that it's a standard for UNIX, and nothing more. It also said UNIX was gradually being called Linux, but why, exactly, if it's essentially the same thing?
I know next to nothing about Linux, but from what I learned in this reading, it sounds like a more effective and usable operating system than Windows or Mac, as long as you understand how to use it and how it works. I would be hesitant about trying it myself, though, since it said that though progress is being made, it is not very user-friendly for beginners.
“What is Mac OS X?” By Amit Singh
(http://osxbook.com/book/bonus/ancient/whatismacosx//)
Having never owned a Mac, I found this article to be even more confusing than the Linux one. Since I’ve used them seldom and don’t really know anything about Macs, the names of all the programs are just names to me, and don’t really mean anything else. Maybe it’s because I’m used to Windows, but the whole Mac operating system just seems twice as complicated for me to understand. Open Firmware and Bootloader especially seemed tremendously complicated. I understand that they can be powerful tools, but I think you need to be an expert in order to be able to run them effectively.
However, I was pleasantly surprised to learn that Mach, which XNU was based on, originated as a research project at Carnegie Mellon University in the mid-80s.
Paul Thurott “An Update on the Windows Roadmap”
(http://community.winsupersite.com/blogs/paul/archive/2008/06/27/an-update-on-the-windows-roadmap.aspx)
I honestly don’t understand all the backlash Windows Vista has received lately. I recently got a Dell laptop with Vista, and so far it has given me no problems. Maybe it’s because I’m not a techie and don’t get exactly how differently Windows systems work from each other, but I’ve never found Vista to be particularly hard to use.
To touch on the Windows vs. Mac debate: since our family got our first computer in 2001 (yes, we were latecomers!), which ran Windows ME, none of us have ever had problems with any of our computers that were Windows' fault. There were a couple of crashes, but no unrecoverable data loss. I have several friends who have worked with all sorts of computers for years, and according to them, if you have a Windows machine that continually crashes, it's something you are doing wrong, not the software's fault. I believe that as long as you use it smartly (not shutting down random services without knowing their function, running anti-virus programs configured for each particular machine, and configuring firewalls to match), a Windows PC will run reliably for years. That's not to say I don't like Macs or think they are unreliable, but I think it's a complete myth to say they never screw up or crash. From what I understand, they can crash just as often as PCs, and they are beyond annoying to deal with when something goes wrong. And when something goes wrong, it's bad.
“Introduction to Linux: A Hands on Guide” By Machtelt Garrels
(http://tldp.org/LDP/intro-linux/html/chap_01.html)
In the old days, every computer had a different operating system, and software for one didn’t work on another. Garrels writes, “In 1969, a team of developers in the Bell Labs laboratories started working on a solution for the software problem, to address these compatibility issues. They developed a new operating system, which was simple and elegant, written in the C programming language instead of in assembly code, [and] able to recycle code. The Bell Labs developers named their project ‘UNIX.’”
Linux is a more recent implementation of UNIX; as Garrels writes, “Linux is a full UNIX clone, fit for use on workstations as well as on middle-range and high-end servers.”
There were a few things I found confusing. At times the writer mentioned terms that were never explained, as if he expected us to know them already. For example, what is comp.os.minix? And what exactly is POSIX? The article says it’s a standard for UNIX but nothing more. It also said the system gradually came to be called Linux instead of UNIX, but why, exactly, if it’s essentially the same thing?
I know next to nothing about Linux, but from what I learned in this reading, it sounds like a more effective and usable operating system than Windows or Mac OS, as long as you understand how it works. I would be hesitant to try it myself, though, since the article said that, although progress is being made, it is not very user-friendly for beginners.
“What is Mac OS X?” By Amit Singh
(http://osxbook.com/book/bonus/ancient/whatismacosx//)
Having never owned a Mac, I found this article to be even more confusing than the Linux one. Since I’ve seldom used Macs and don’t really know anything about them, the names of all the programs are just names to me and don’t mean anything else. Maybe it’s because I’m used to Windows, but the whole Mac operating system seems twice as complicated for me to understand. Open Firmware and the bootloader especially seemed tremendously complicated. I understand that they can be powerful tools, but I think you need to be an expert to run them effectively.
However, I was pleasantly surprised to learn that Mach, which XNU was based on, originated as a research project at Carnegie Mellon University in the mid-80s.
Paul Thurrott, “An Update on the Windows Roadmap”
(http://community.winsupersite.com/blogs/paul/archive/2008/06/27/an-update-on-the-windows-roadmap.aspx)
I honestly don’t understand all the backlash Windows Vista has received lately. I recently got a Dell laptop with Vista, and so far it has given me no problems. Maybe it’s because I’m not a techie and don’t understand exactly how Windows systems differ from one another, but I’ve never found Vista to be particularly hard to use.
To touch on the Windows vs. Mac debate: since our family got our first computer in 2001 (yes, we were latecomers!), which ran Windows ME, none of us have ever had problems with any of our computers that were Windows’ fault. There were a couple of crashes, but no unrecoverable data loss. I have several friends who have worked with all sorts of computers for years, and according to them, if you have a Windows machine that continually crashes, it’s something you are doing wrong, not the program’s fault. I believe that as long as you use it smartly (not shutting down random services without knowing their function, running anti-virus software configured to work best for each particular machine, and configuring firewalls to match), a Windows PC will run reliably for years. That’s not to say I don’t like Macs or think they are unreliable, but I think it’s a complete myth to say that they never screw up or crash. From what I understand, they can crash just as often as PCs and are beyond annoying to deal with when something goes wrong. And when something goes wrong, it’s bad.
Week 2 reading notes
I have to apologize for the extreme lateness in posting these - I got very confused about when each reading was due and only recently got it straightened out. Week 3's reading notes will be up later today.
Notes on personal computer hardware:
(http://en.wikipedia.org/wiki/Computer_hardware)
Typical PC hardware includes:
Motherboard
-- Central Processing Unit (CPU)
-- Chipset
-- RAM
-- Basic Input Output System (BIOS)
-- Internal buses
-- External bus controllers
Power supply
-- Power cords, switch, cooling fan
Video display controller (graphics card)
Removable media devices (storage)
-- CD
-- DVD
-- Blu-ray
-- USB flash drive
-- Tape drive
Internal storage
-- Hard disk
-- Solid-state drive
-- RAID array controller
Sound card
Input:
Text input devices
-- Keyboard
Pointing devices
-- Mouse
-- Optical Mouse
-- Trackball
Gaming devices
-- Joystick
-- Gamepad
-- Game controller
Image, Video input devices
-- Image scanner
-- Webcam
Audio input devices
-- Microphone
Though this entry did not go into great detail about all of this hardware, it provided links to other Wikipedia entries which talked about each item in more depth. For someone like me who does not know a lot about how the technical side of computers works, it was effective in helping me understand a bit more. A lot of it I had always sort of overlooked as common knowledge, but it was nice to see everything categorized and listed together, along with links to more in-depth descriptions.
Moore's law
(http://en.wikipedia.org/wiki/Moore%27s_law)
Moore’s Law states that the number of transistors that can be placed on an integrated circuit doubles about every two years, a trend that has held since the integrated circuit’s invention in 1958. Moore stated this in 1965, and so far the trend has continued to this day; though he doesn’t expect it to last forever, it is not expected to stop for at least another five years.
When you’re looking ahead to the future, it gets to the point where you wonder how much more things like this can improve - like for example, are we going to get to the point where we can store terabytes or more of information on a computer the size of an iPod? I have a hard time imagining what more can be done, but this trend has continued for so long and with such consistency that I also can’t really imagine it tapering off anytime soon. Progress is made in technology so continuously that we are always being surprised and impressed with its improvements.
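Out of curiosity, the doubling in Moore's observation is easy to sketch as plain arithmetic. The starting figures below (roughly 2,300 transistors on the Intel 4004 in 1971) are commonly cited numbers, used here only for illustration:

```python
# Moore's law as arithmetic: a count that doubles every two years.
def projected_transistors(start_count, start_year, year):
    """Project a transistor count forward from a starting year,
    doubling once every two years."""
    doublings = (year - start_year) / 2
    return start_count * 2 ** doublings

# Starting from roughly 2,300 transistors on the Intel 4004 in 1971:
print(round(projected_transistors(2300, 1971, 2009)))
```

Doubling every two years from 1971 lands in the low billions of transistors by 2009, which is roughly the scale of actual high-end chips that year - a striking illustration of how far the trend has carried.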
The Computer History Museum
(http://www.computerhistory.org/)
I've always been fascinated with comparing modern things to items of the past, discovering their origins and seeing how much they've changed, so I found this website to be extremely interesting. A lot of the technical electrical jargon went over my head, but it was still interesting to see how everything progressed over the course of the years. I particularly enjoyed the timeline of computer storage history, and looking over all of the old models of computers from the 30s to the early 90s.
Thursday, September 10, 2009
Link to my Flickr photostream
Here is the link to my photostream on Flickr:
http://www.flickr.com/photos/42457576@N08/
I'll be updating it with my pictures for Assignment #2 in due time!
Muddiest Point for Week 2
I was wondering exactly how RAM works - what kind of data does it store that is lost when the computer is turned off? Is it like Internet cookies, or the contents of the Temporary Internet Files folder on a PC? Or does it have more to do with actions you perform offline?
Tuesday, September 8, 2009
Week 1, Assignment 1
Notes gathered from “2004 Information Format Trends: Content, Not Containers”
(OCLC report: Information Format Trends: Content, Not Containers (2004). http://www.oclc.org/reports/2004format.htm)
This paper claims that content consumers generally don’t care what form information comes in, whether books, journals, or Web pages.
- According to Mark Federman of the McLuhan Program in Culture and Technology at the University of Toronto, the “message” of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs. [2]
- A recent study shows that almost 41 percent of the academic libraries sampled plan to “aggressively” reduce spending for print and increase expenditures for electronic resources. [36]
- What seems clear is that libraries should move beyond the role of collector and organizer of content, print and digital, to one that establishes the authenticity and provenance of content and provides the imprimatur of quality in an information rich but context-poor world. The challenge is how to do this. The best way to adapt is to understand what’s forcing the change.
- This new world is abundant and unstructured, but contextual mechanisms for navigating and synthesizing the information commons are scarce, even in—perhaps especially in—libraries. “We are drowning in information but are starving for knowledge. Information is only useful when it can be located and synthesized into knowledge.” [53]
2. Mark Federman, “What is the Meaning of the Medium is the Message?” n.d., http://www.mcluhan.utoronto.ca/article_mediumisthemessage.htm (viewed July 18, 2004).
36. Primary Research Group, The Survey of Academic Libraries, 2004, Press release, PRWeb, March 2004, http://www.prweb.com/releases/2004/3/prweb112699.htm (viewed July 19, 2004).
53. Mani Shabrang, Dow Chemical Business Intelligence Center as quoted in Drew Robb, “Text Mining Tools Take On Unstructured Data,” Computerworld, June 21, 2004, n.p., http://www.computerworld.com/databasetopics/businessintelligence/story/0,10801,93968,00.html (viewed July 18, 2004).
~&~
Notes from “Information Literacy and Information Technology Literacy: New Components in the Curriculum for a Digital Culture”
(Clifford Lynch, “Information Literacy and Information Technology Literacy: New Components in the Curriculum for a Digital Culture” http://www.cni.org/staff/cliffpubs/info_and_IT_literacy.pdf)
In this paper, Lynch distinguishes information technology literacy from information literacy, and emphasizes the need for people of all walks of life to stay well-versed in the skills needed to operate and understand information technology, since its tools become obsolete so quickly. In his view, information technology literacy deals with understanding the technology tools that support everyday life, while information literacy deals with the content itself and with communication. He also outlines what he sees as the two general perspectives on information technology literacy: the first emphasizes skills in the use of information technology tools, while the second focuses on understanding how technologies and systems work.
Wednesday, September 2, 2009
Hello!
This will be my blog for the Fall 2009 Introduction to Information Technology course (LIS 2600) at Pitt.
:)