Copyright © 2022 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada and the United Kingdom.
ISBN: 978-1-119-86291-8
ISBN: 978-1-119-86293-2 (ebk.)
ISBN: 978-1-119-86292-5 (ebk.)
No part of this publication may be reproduced, stored in a retrieval
system, or transmitted in any form or by any means, electronic,
mechanical, photocopying, recording, scanning, or otherwise, except as
permitted under Section 107 or 108 of the 1976 United States Copyright
Act, without either the prior written permission of the Publisher, or
authorization through payment of the appropriate per-copy fee to the
Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923,
(978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com.
Requests to the Publisher for permission should be addressed to the
Permissions Department, John Wiley & Sons, Inc., 111 River Street,
Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Website is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Website may provide or recommendations it may make. Further, readers should be aware the Internet Websites listed in this work may have changed or disappeared between when this work was written and when it is read.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats.
Some content that appears in print may not be available in electronic
formats. For more information about Wiley products, visit our web site
at www.wiley.com.
Library of Congress Control Number: 2022931863
TRADEMARKS: WILEY, the Wiley logo, Sybex, and the Sybex logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without written permission. CompTIA and A+ are registered trademarks of CompTIA, Inc. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
Cover image: © Getty Images Inc./Jeremy Woodhouse
Cover design: Wiley
For my girls.
—Quentin Docter
For my wife and son.
—Jon Buhagiar
As we were putting together this book, I was reminded of the proverb that begins “It takes a village….” That beginning definitely holds true for creating a book of this scope and size. From beginning to end, scores of dedicated professionals have focused on delivering the best book possible to you, the readers.
First, I need to thank my coauthor, Jon Buhagiar. I appreciate him diving in and dedicating himself to helping to produce an excellent book. I also need to give special thanks to our technical editor, Chris Crayton. He was meticulous and thorough, and challenged me to always find new and better ways to communicate complex concepts. His entire focus was on providing the best training material possible, and I doubt there's better in the business. Now, on to the rest of the team.
Kenyon Brown and Kim Wimpsett kept us on track and moving forward, which was a challenge at times. Saravanan Dakshinamurthy had the fun job of keeping us organized, which is akin to herding cats. Copyeditor Elizabeth Welch reminded me yet again that I am no master of the English language and saved me from butchering it (too badly). Many thanks also go out to our proofreader, Arielle Guy, and our indexer, Tom Dinse. Without their great contributions, this book would not have made it into your hands.
On a personal note, I need to thank my family. My girls are all incredibly supportive. Unfortunately, book writing as a side hustle while holding down a full-time job takes up a lot of time. I end up hiding in my office a lot, but they're always there for me, and I couldn't do it without them. Another huge thanks goes to my late grandpa, Joe, who got me into computers and taught me so many lessons I will never be able to repay. Finally, thanks to my friends who keep me relatively sane—Sean, Kurtis, Tim, John, Cory, and others—and laugh at me when I tell them I spent my weekend writing about the laser printer imaging process.
—Quentin Docter
I would like to first thank my coauthor, Quentin Docter. Throughout the writing of this book, he helped me with his insight and his expertise in writing technical books. Without his words of wisdom and guidance, this book would not be the product it stands to be. I would also like to give special thanks to our technical editor, Chris Crayton. His thorough review of the material helped to identify many areas for us to elaborate on and polish.
I would also like to thank the many people who made this book possible: Kenyon Brown at Wiley Publishing, for giving me the opportunity to write this book and work with this wonderful team; Kim Wimpsett, for keeping us on track during the writing process; Christine O'Connor, who also kept us on track and organized during the publishing process; and our copyeditor, Liz Welch, for helping me use proper English. I'd also like to thank the many other people I've never met but who worked behind the scenes to make this book a success.
During the writing of this book, many others in my life supported me, and I'd like to take this opportunity to thank them as well. First and foremost, thanks to my wife and son for their support during the many evenings and weekends spent in my office at the computer. Finally, thanks to my friend and coworker, Bill, for encouraging me daily with his insight and jokes, as well as to all my other coworkers and friends. Thank you.
—Jon Buhagiar
Quentin Docter (A+, Network+, IT Fundamentals+, Cloud Essentials+, MCSE, CCNA, SCSA) is an IT consultant who started in the industry in 1994. Since then, he's worked as a tech and network support specialist, trainer, consultant, and webmaster. He has written more than a dozen books for Sybex, including books on A+, IT Fundamentals+, Cloud Essentials+, Server+, Windows, and Solaris 9 certifications, as well as PC hardware and maintenance.
Jon Buhagiar (Network+, A+, CCNA, MCSA, MCSE, BS/ITM) is an information technology professional with two decades of experience in higher education. During the past 22 years he has been responsible for network operations at Pittsburgh Technical College and has led several projects, such as virtualization (server and desktop), VoIP, Microsoft 365, and many other projects supporting the quality of education at the college. He has achieved several certifications from Cisco, CompTIA, and Microsoft, and has taught many of the certification paths. He is the author of several books, including Sybex's CompTIA Network+ Review Guide: Exam N10-008 (2021) and CCNA Certification Practice Tests: Exam 200-301 (2020).
Chris Crayton is a technical consultant, trainer, author, and industry-leading technical editor. He has worked as a computer technology and networking instructor, information security director, network administrator, network engineer, and PC specialist. Chris has authored several print and online books on PC repair, CompTIA A+, CompTIA Security+, and Microsoft Windows. He has also served as technical editor and content contributor on numerous technical titles for several of the leading publishing companies. He holds numerous industry certifications, has been recognized with many professional and teaching awards, and has served as a state-level SkillsUSA final competition judge.
EXERCISE 2.1 Removing an Internal Storage Device
EXERCISE 2.2 Installing an Internal Storage Device
EXERCISE 2.3 Removing a Power Supply
EXERCISE 3.1 Changing the Refresh Rate in Windows 10
EXERCISE 3.2 Changing the Settings for Multiple Monitors
EXERCISE 4.1 Identifying the Parts of an Inkjet Printer
EXERCISE 4.2 Installing a USB Printer in Windows 10
EXERCISE 4.3 Installing a TCP/IP Printer in Windows 10
EXERCISE 4.4 Determining if Bonjour Is Installed in Windows
EXERCISE 4.5 Scanning a Document to Google Drive
EXERCISE 4.6 Using an Inkjet Cleaning Solution
EXERCISE 4.7 Installing Memory into a Laser Printer
EXERCISE 5.1 Pricing Network Cables
EXERCISE 7.1 The Cost of Networking
EXERCISE 7.2 Installing an Internal NIC in Windows 10
EXERCISE 8.1 Configuring Windows 10 to Use a Proxy Server
EXERCISE 8.2 Using Google's Cloud Services
EXERCISE 8.3 Enabling Hyper-V in Windows 10
EXERCISE 8.4 Installing VirtualBox and Lubuntu on Windows 10
EXERCISE 9.1 Removing Speakers from a Laptop
EXERCISE 9.2 Removing the Display Assembly
EXERCISE 9.3 Removing the Display Panel
EXERCISE 9.4 Removing the Motherboard from a Laptop
EXERCISE 9.5 Replacing Laptop Memory
EXERCISE 9.6 Removing an M.2 SSD from a Laptop
EXERCISE 9.7 Removing a Laptop Keyboard
EXERCISE 9.8 Disabling a Touchpad in Windows 10
EXERCISE 9.9 Removing an Internal Laptop Battery
EXERCISE 9.10 Removing the System Fan
EXERCISE 9.11 Removing the CPU Heat Sink
EXERCISE 9.12 Removing the Wireless NIC
EXERCISE 9.13 Removing the CMOS Battery
EXERCISE 9.14 Flashing the System BIOS
EXERCISE 10.1 Connecting an iPhone to a Wi-Fi Network
EXERCISE 10.2 Connecting an Android Phone to a Wi-Fi Network
EXERCISE 10.3 Disabling Cellular Use for Data Networking on an iPhone
EXERCISE 10.4 Disabling Cellular Use for Data Networking in Android OS
EXERCISE 10.5 Setting Up a VPN in Android
EXERCISE 10.6 Pairing an Android Device with a Windows Laptop
EXERCISE 10.7 Pairing an iPhone with a Vehicle's Sound System
EXERCISE 10.8 Configuring Location Services in iOS
EXERCISE 10.9 Email Account Configuration on an iPhone
EXERCISE 10.10 Email Account Configuration in Android
EXERCISE 10.11 Enabling ActiveSync in iOS
EXERCISE 11.1 Troubleshooting Practice
EXERCISE 12.1 Using a S.M.A.R.T. Software Utility in Windows
EXERCISE 12.2 Stopping and Restarting the Print Spooler in Windows 10
EXERCISE 12.3 Renewing an IP Address in Windows 10
EXERCISE 12.4 Renewing an IP Address from the Command Line
EXERCISE 12.5 Using the net share Command in Windows
EXERCISE 13.1 Changing a Screen Saver in Windows
EXERCISE 13.2 Auto-Hiding the Taskbar
EXERCISE 13.3 Starting a Program from the Run Window
EXERCISE 14.1 Working with Task Manager
EXERCISE 14.2 Working with Performance Monitor
EXERCISE 14.3 Changing the Time Zone
EXERCISE 14.4 Showing Hidden Files and Folders
EXERCISE 15.1 Command-Line Directory Management
EXERCISE 15.2 Running chkdsk within Windows
EXERCISE 15.3 Running chkdsk at the Command Line
EXERCISE 16.1 Installing Applications on macOS
EXERCISE 16.2 Uninstalling Applications on macOS
EXERCISE 16.3 Working with Files
EXERCISE 17.1 Testing Your Antimalware
EXERCISE 17.2 Testing Social Engineering
EXERCISE 18.1 Examining a Security Token
EXERCISE 18.2 Examining File Permissions
EXERCISE 18.3 Working with File Hashes
EXERCISE 18.4 Setting the Passcode Lock on an iPhone
EXERCISE 18.5 Setting the Passcode Lock on an Android Phone
EXERCISE 19.1 Reviewing Reliability Monitor
EXERCISE 19.2 Manually Creating a Restore Point in Windows
EXERCISE 20.1 Creating and Running a Windows Batch Script
EXERCISE 20.2 Creating Your First PowerShell Script
EXERCISE 21.1 Finding Trip Hazards
Welcome to the CompTIA A+ Complete Study Guide. This is the fifth edition of our best-selling study guide for the A+ certification sponsored by CompTIA (Computing Technology Industry Association). Thank you for choosing us to help you on your journey toward certification!
This book is written at an intermediate technical level; we assume that you already know how to use a personal computer and its basic peripherals, such as USB devices and printers, but we also recognize that you may be learning how to service some of that computer equipment for the first time. The exams cover basic computer service topics as well as more advanced issues, and they cover topics that anyone already working as a technician should be familiar with. The exams are designed to test you on these topics in order to certify that you have enough knowledge to fix and upgrade some of the most widely used types of personal computers and operating systems.
In addition to the prose in the chapters, we've included a lot of extra material to help your study prep. At the end of each chapter is a list of exam essentials to know as well as 20 review questions to give you a taste of what it's like to take the exams. In addition, there are eight bonus exams of at least 50 questions each. Finally, there are flashcards designed to help your recall. Before you dive into those, though, we recommend you take the assessment test at the end of this introduction to gauge your current knowledge level.
Don't just study the questions and answers—the questions on the actual exams will be different from the practice ones included with this book. The exams are designed to test your knowledge of a concept or objective, so use this book to learn the objective behind the question. That said, we're confident that if you can do well on our quizzes, you will be well equipped to take the real exam.
This book covers more than just the exams, however. We believe in providing our students with a foundation of IT knowledge that will prepare them for real jobs, not just to pass a test. After all, life is not a multiple-choice test with the answers clearly laid out in front of you!
If you are an experienced IT professional, you can use the book to fill in the gaps in your current computer service knowledge. You may find, as many PC technicians have, that being well versed in all the technical aspects of hardware and operating systems is not enough to provide a satisfactory level of support—you must also have customer-relations skills, understand safety concepts, and be familiar with change management and environmental impacts and controls. We include helpful hints in all of these areas.
The A+ certification program was developed by CompTIA to provide an industry-wide means of certifying the competency of computer service technicians. The A+ certification is granted to those who have attained the level of knowledge and troubleshooting skills that are needed to provide capable support in the field of personal computers. It is similar to other certifications in the computer industry, such as the Cisco Certified Technician (CCT) program and the Microsoft Technology Associate (MTA) certification program. The theory behind these certifications is that if you need to have service performed on any of their products, you would sooner call a technician who has been certified in one of the appropriate certification programs than just call the first “expert” in the phone book.
The A+ certification program was created to offer a wide-ranging certification, in the sense that it is intended to certify competence with personal computers and mobile devices from many different makers/vendors. You must pass two tests to become A+ certified: the Core 1 (220-1101) exam and the Core 2 (220-1102) exam.
You don't have to take the 220-1101 and the 220-1102 exams at the same time. However, the A+ certification is not awarded until you've passed both tests.
There are several good reasons to get your A+ certification. The CompTIA Candidate's Information packet lists five major benefits:
The A+ certification is a status symbol in the computer service industry. Organizations that include members of the computer service industry recognize the benefits of A+ certification and push for their members to become certified. And more people every day are putting the “A+ Certified Technician” emblem on their business cards.
A+ certification makes individuals more marketable to potential employers. A+ certified employees also may receive a higher base salary because employers won't have to spend as much money on vendor-specific training.
Most raises and advancements are based on performance. A+ certified employees work faster and more efficiently and are thus more productive. The more productive employees are, the more money they make for their company. And, of course, the more money they make for the company, the more valuable they are to the company. So, if an employee is A+ certified, their chances of being promoted are greater.
Most major computer hardware vendors recognize A+ certification. Some of these vendors apply A+ certification toward prerequisites in their own respective certification programs, which has the side benefit of reducing training costs for employers.
As the A+ Certified Technician moniker becomes better known among computer owners, more of them will realize that the A+ technician is more qualified to work on their computer equipment than a noncertified technician.
A+ certification is available to anyone who passes the tests. You don't have to work for any particular company. It's not a secret society. It is, however, an elite group. To become A+ certified, you must do two things: pass the Core 1 (220-1101) exam and pass the Core 2 (220-1102) exam.
The exams can be taken at any Pearson VUE testing center. If you pass both exams, you will get a certificate in the mail from CompTIA saying that you have passed, and you will also receive a lapel pin and business card.
To register for the tests, go to www.pearsonvue.com/comptia.
You'll be asked for your name, Social Security number (an optional
number may be assigned if you don't wish to provide your Social Security
number), mailing address, phone number, employer, when and where you
want to take the test, and your credit card number. (Payment
arrangements must be made at the time of registration.)
Here are some general tips for taking your exam successfully:
www.comptia.org.
If you are one of the many people who want to pass the A+ exams, and pass them confidently, then you should buy this book and use it to study for the exams.
This book was written to prepare you for the challenges of the real IT world, not just to pass the A+ exams. This study guide will do that by describing in detail the concepts on which you'll be tested.
This book covers everything you need to know to pass the CompTIA A+ exams.
Part I of the book starts at Chapter 1 and concludes after Chapter 12. It covers all the topics on which you will be tested for Exam 220-1101:
Part II of the book, Chapters 13–22, covers all the topics on which you will be tested for Exam 220-1102:
We've included several learning tools throughout the book:
The interactive online learning environment that accompanies CompTIA A+ Complete Study Guide: Exam 220-1101 and Exam 220-1102 provides a test bank with study tools to help you prepare for the certification exams and increase your chances of passing them the first time! The test bank includes the following elements:
If you want a solid foundation for preparing for the A+ exams, this is the book for you. We've spent countless hours putting together this book with the intention of helping you prepare for the exams.
This book is loaded with valuable information, and you will get the most out of your study time if you understand how we put the book together. Here's a list that describes how to approach studying:
The A+ exams consist of the Core 1 220-1101 exam and the Core 2 220-1102 exam. Following are the detailed exam objectives for each test.
Exam objectives are subject to change at any time without prior notice and at CompTIA's sole discretion. Please visit the A+ Certification page of CompTIA's website (comptia.org/certifications/a) for the most current listing of exam objectives.
The following table lists the domains measured by this examination and the extent to which they are represented on the exam:
Domain | Percentage of exam |
---|---|
1.0 Mobile Devices | 15% |
2.0 Networking | 20% |
3.0 Hardware | 25% |
4.0 Virtualization and Cloud Computing | 11% |
5.0 Hardware and Network Troubleshooting | 29% |
Total | 100% |
The following table lists where you can find the objectives covered in this book.
Objective | Chapter(s) |
---|---|
1.0 Mobile Devices | |
1.1 Given a scenario, install and configure laptop hardware and components. | 9 |
1.2 Compare and contrast the display components of mobile devices. | 9 |
1.3 Given a scenario, set up and configure accessories and ports of mobile devices. | 9 |
1.4 Given a scenario, configure basic mobile-device network connectivity and application support. | 10 |
2.0 Networking | |
2.1 Compare and contrast Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) ports, protocols, and their purposes. | 6 |
2.2 Compare and contrast common networking hardware. | 5 |
2.3 Compare and contrast protocols for wireless networking. | 7 |
2.4 Summarize services provided by networked hosts. | 8 |
2.5 Given a scenario, install and configure basic wired/wireless small office/home office (SOHO) networks. | 7 |
2.6 Compare and contrast common network configuration concepts. | 6 |
2.7 Compare and contrast Internet connection types, network types, and their features. | 5, 7 |
2.8 Given a scenario, use networking tools. | 12 |
3.0 Hardware | |
3.1 Explain basic cable types and their connectors, features, and purposes. | 3, 5 |
3.2 Given a scenario, install the appropriate RAM. | 1 |
3.3 Given a scenario, select and install storage devices. | 2 |
3.4 Given a scenario, install and configure motherboards, central processing units (CPUs), and add-on cards. | 2 |
3.5 Given a scenario, install or replace the appropriate power supply. | 2 |
3.6 Given a scenario, deploy and configure multifunction devices/printers and settings. | 4 |
3.7 Given a scenario, install and replace printer consumables. | 4 |
4.0 Virtualization and Cloud Computing | |
4.1 Summarize cloud-computing concepts. | 8 |
4.2 Summarize aspects of client-side virtualization. | 8 |
5.0 Hardware and Network Troubleshooting | |
5.1 Given a scenario, apply the best practice methodology to resolve problems. | 11 |
5.2 Given a scenario, troubleshoot problems related to motherboards, RAM, CPU, and power. | 11 |
5.3 Given a scenario, troubleshoot and diagnose problems with storage drives and RAID arrays. | 12 |
5.4 Given a scenario, troubleshoot video, projector, and display issues. | 12 |
5.5 Given a scenario, troubleshoot common issues with mobile devices. | 12 |
5.6 Given a scenario, troubleshoot and resolve printer issues. | 12 |
5.7 Given a scenario, troubleshoot problems with wired and wireless networks. | 12 |
The following table lists the domains measured by this examination and the extent to which they are represented on the exam.
Domain | Percentage of exam |
---|---|
1.0 Operating Systems | 31% |
2.0 Security | 25% |
3.0 Software Troubleshooting | 22% |
4.0 Operational Procedures | 22% |
Total | 100% |
The following table lists where you can find the objectives covered in the book.
Objective | Chapter(s) |
---|---|
1.0 Operating Systems | |
1.1 Identify basic features of Microsoft Windows editions. | 13 |
1.2 Given a scenario, use the appropriate Microsoft command-line tool. | 15 |
1.3 Given a scenario, use features and tools of the Microsoft Windows 10 operating system (OS). | 14 |
1.4 Given a scenario, use the appropriate Microsoft Windows 10 Control Panel utility. | 14 |
1.5 Given a scenario, use the appropriate Windows settings. | 14 |
1.6 Given a scenario, configure Microsoft Windows networking features on a client/desktop. | 15 |
1.7 Given a scenario, apply application installation and configuration concepts. | 13 |
1.8 Explain common OS types and their purposes. | 13, 14 |
1.9 Given a scenario, perform OS installations and upgrades in a diverse OS environment. | 14, 15 |
1.10 Identify common features and tools of the macOS/desktop OS. | 16 |
1.11 Identify common features and tools of the Linux client/desktop OS. | 16 |
2.0 Security | |
2.1 Summarize various security measures and their purposes. | 17 |
2.2 Compare and contrast wireless security protocols and authentication methods. | 18 |
2.3 Given a scenario, detect, remove, and prevent malware using the appropriate tools and methods. | 17 |
2.4 Explain common social-engineering attacks, threats, and vulnerabilities. | 17 |
2.5 Given a scenario, manage and configure basic security settings in the Microsoft Windows OS. | 18 |
2.6 Given a scenario, configure a workstation to meet best practices for security. | 17 |
2.7 Explain common methods for securing mobile and embedded devices. | 18 |
2.8 Given a scenario, use common data destruction and disposal methods. | 17 |
2.9 Given a scenario, configure appropriate security settings on small office/home office (SOHO) wireless and wired networks. | 18 |
2.10 Given a scenario, install and configure browsers and relevant security settings. | 18 |
3.0 Software Troubleshooting | |
3.1 Given a scenario, troubleshoot common Windows OS problems. | 19 |
3.2 Given a scenario, troubleshoot common personal computer (PC) security issues. | 19 |
3.3 Given a scenario, use best practice procedures for malware removal. | 19 |
3.4 Given a scenario, troubleshoot common mobile OS and application issues. | 19 |
3.5 Given a scenario, troubleshoot common mobile OS and application security issues. | 19 |
4.0 Operational Procedures | |
4.1 Given a scenario, implement best practices associated with documentation and support systems information management. | 22 |
4.2 Explain basic change-management best practices. | 22 |
4.3 Given a scenario, implement workstation backup and recovery methods. | 22 |
4.4 Given a scenario, use common safety procedures. | 21 |
4.5 Summarize environmental impacts and local environmental controls. | 21 |
4.6 Explain the importance of prohibited content/activity and privacy, licensing, and policy concepts. | 21 |
4.7 Given a scenario, use proper communication techniques and professionalism. | 22 |
4.8 Identify the basics of scripting. | 20 |
4.9 Given a scenario, use remote access technologies. | 20 |
The System applet (SYSDM.CPL) allows you to change the computer name and join the system to a domain. Device Manager is used to manage hardware resources. The User Accounts applet is used to manage user accounts. Credential Manager is used to manage stored credentials. See Chapter 14 for more information.
The sysprep.exe utility allows you to ready the operating system for imaging by resetting specific information, such as the computer name. The Microsoft Deployment Toolkit can assist in creating the steps, but it calls on the sysprep tool. The Windows Assessment and Deployment Kit allows you to customize the Windows operating system for imaging, but it does not ready the operating system for imaging. Windows Imaging (WIM) is a file format to contain the image. See Chapter 15 for more information.
The Boot Configuration Data (BCD) contains the information used by the Windows Boot Manager (BOOTMGR) to load the operating system from a specific partition. winload.exe loads the operating system kernel. BOOTMGR is the initial bootstrap program that reads the BCD. winresume.exe is used when resuming a previous session that has been suspended. See Chapter 15 for more information.
The .ipa file extension is for iOS app store package files, and it is therefore associated with iOS. Android apps have an extension of .apk. Windows 10 uses .exe. BlackBerry OS uses an extension of .jad. The latter two phone types were not discussed in detail in this book. See Chapter 18 for more information.
Windows uses time.windows.com as an NTP source by default. The availability of the time.windows.com NTP server is no different for virtual machines than for physical machines. When deploying virtual machines, the physical RTC is not shared; each VM gets an emulated RTC. See Chapter 19 for more information.
Although services can be viewed with the msconfig.exe tool, they cannot be restarted with the tool; they can only be enabled or disabled on startup. The Windows Recovery Environment (WinRE) is used to troubleshoot and repair problems offline. Resource Monitor cannot be used to restart services. See Chapter 19 for more information.
The line REM comment is used to comment Windows batch script code. The line //comment is used to comment JavaScript code. The line 'comment is used to comment VBScript code. The line # comment is used to comment Bash script code and PowerShell code. See Chapter 20 for more information.
The command mstsc.exe will launch the Remote Desktop Connection utility. From this utility you can remotely connect to a server or other workstation. The command msra.exe launches the Microsoft Remote Assistance utility to allow a trusted helper to remote in to help. The command quickassist.exe will launch the Quick Assist remote assistance utility to allow an assistant to remote in to help. The command ssh.exe launches the Secure Shell client that allows you to connect to a Linux/UNIX server or networking equipment. See Chapter 20 for more information.
THE FOLLOWING COMPTIA A+ 220-1101 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
The computers we use daily, from the largest servers to the smallest smartwatches and everything in between, are collections of different electronic components and software working together in a system. Digital computers have been around since the late 1930s, so they aren't exactly new. As you would imagine, though, their looks and functionality have evolved considerably since then.
As technology improved over the years, computers got smaller and faster, and inventors added features that required new hardware devices. Inspired by typewriters, keyboards were added for ease of input. Visual displays required a monitor and a video card and a standard interface between them. Sound was provided by a different expansion card and speakers. Because of the newly added features, PCs were modular by necessity. That is, if a new feature or functionality were needed, a new component could be added. Or if a part failed, it could be replaced by a new one. Along the way in the late 1960s, the term personal computer (PC) was coined to differentiate between computers designed to be used by one person versus other options, such as a mainframe or one where multiple users share a processor. This book, and the CompTIA A+ exams, focus on PC hardware and software.
Much of the computing industry today is focused on smaller devices, such as laptops, tablets, and smartphones. Laptops have outsold desktop computers since 2005, and it seems that everyone has a smartphone glued to their hand today. Smaller devices require the same components as do their bigger desktop-sized cousins. Of course, the components are smaller and many times integrated into the same circuit board. The functionality of the individual parts is still critical, though, so what you learn here will serve you well regardless of the type of device you're working on.
Even though all parts inside a computer case are important, some are more important than others. You'll sometimes hear people refer to the “big three” computer parts, which are the motherboard, processor, and memory. Without these, a computer won't work, whereas if a sound card fails, it's probably not going to make the entire system inoperable. In this chapter, you will learn how to identify, install, and configure the big three, which are critical to computer functionality. You'll also learn about cooling systems, because too much heat will cause components to melt, which could make for a very bad day for the computer user.
The spine of the computer is the motherboard, otherwise known as the system board or mainboard. This is the printed circuit board (PCB), which is a conductive series of pathways laminated to a nonconductive substrate that lines the bottom of the computer and is often of a uniform color, such as green, brown, blue, black, or red. It is the most important component in the computer because it connects all the other components together. Figure 1.1 shows a typical PC system board, as seen from above. All other components are attached to this circuit board. On the system board, you will find the central processing unit (CPU) slot or integrated CPU, underlying circuitry, expansion slots, video components, random access memory (RAM) slots, and a variety of other chips. We will be discussing each of these components throughout this book.
There are hundreds, if not thousands, of different motherboards in the market today. It can be overwhelming trying to figure out which one is needed. To help, think of motherboards in terms of which type they are, based on a standard classification. System boards are classified by their form factor (design), such as ATX and ITX. Exercise care and vigilance when acquiring a motherboard and components separately. Motherboards will have different expansion slots, support certain processors and memory, and fit into some cases but not others. Be sure that the other parts are physically compatible with the motherboard you choose.
Intel developed the Advanced Technology eXtended (ATX) motherboard in the mid-1990s to improve upon the classic AT-style motherboard architecture that had ruled the PC world for many years. The ATX motherboard has the processor and memory slots at right angles to the expansion cards, like the one in Figure 1.1. This arrangement puts the processor and memory in line with the fan output of the power supply, allowing the processor to run cooler. And because those components are not in line with the expansion cards, you can install full-length expansion cards—adapters that extend the full length of the inside of a standard computer case—in an ATX motherboard machine. ATX (and its derivatives, such as micro-ATX) is the primary PC motherboard form factor in use today. Standard ATX motherboards measure 12″ × 9.6″ (305 mm × 244 mm).
FIGURE 1.1 A typical motherboard
The Information Technology eXtended (ITX) line of motherboard form factors was developed by VIA Technologies in the early 2000s as a low-power, small form factor (SFF) board for specialty uses, including home-theater systems, compact desktop systems, gaming systems, and embedded components. ITX itself is not an actual form factor but a family of form factors. The family consists of the following form factors:
The mini-ITX motherboard has four mounting holes that line up with three or four of the holes in the ATX and micro-ATX form factors. In mini-ITX boards, the rear interfaces are placed in the same location as those on the ATX motherboards. These features make mini-ITX boards compatible with ATX cases. This is where the mounting compatibility ends, because despite the PC compatibility of the other ITX form factors, they are used in embedded systems, such as set-top boxes, home entertainment systems, and smartphones, and lack the requisite mounting and interface specifications. Figure 1.2 shows the three larger forms of ITX motherboards, next to two ATX motherboards for comparison.
FIGURE 1.2 ITX motherboards
VIA Mini-ITX Form Factor Comparison by VIA Gallery from Hsintien, Taiwan; VIA Mainboards Form Factor Comparison uploaded by Kozuch, licensed under CC BY 2.0 via Commons
Now that you understand the basic types of motherboards and their form factors, it's time to look at the key characteristics and components of the motherboard and, where applicable, their locations relative to each other. The following list summarizes key concepts you need to know about motherboards:
In the following sections, you will learn about some of the most common components of a motherboard, what they do, and where they are located on the motherboard. We'll show what each component looks like so that you can identify it on most any motherboard that you run across. In the case of some components, this chapter provides only a brief introduction, with more detail to come in later chapters.
In a PC, data is sent from one component to another via a bus, which is a common collection of signal pathways. In the very early days, PCs used serial buses, which sent one bit at a time and were painfully slow. Brilliant engineers realized that they could redesign the bus and send 8 bits at a time (over synchronized separate lines), which resulted in a big speed increase. This was known as a parallel bus.
The downside of parallel communications is the loss of circuit length (how long the circuit could be) and throughput (how much data could move at one time). The signal could travel only a short distance, and the amount of data was limited due to the careful synchronization needed between separate lines, the speed of which must be controlled to limit skewing the arrival of the individual signals at the receiving end.
What was once old is new again, as engineers have discovered methods to make serial transmissions work at data rates that are many times faster than parallel signals. Therefore, nearly everything you see today uses a serial bus. The only limitation of serial circuits is in the capability of the transceivers, which tends to grow over time at a refreshing rate due to technical advancements. Examples of specifications that have heralded the dominance of serial communications are Serial Advanced Technology Attachment (Serial ATA, or SATA), Universal Serial Bus (USB), IEEE 1394/FireWire, and Peripheral Component Interconnect Express (PCIe).
On a motherboard, several different buses are used. Expansion slots of various architectures, such as PCIe, are included to allow for the insertion of external devices or adapters. Other types of buses exist within the system to allow communication between the CPU, RAM, and other components with which data must be exchanged. Except for CPU slots and sockets and memory slots, there are no insertion points for devices in many closed bus systems because no adapters exist for such an environment.
The various buses throughout a given computer system can be rated by their bus speeds. The higher the bus speed, the higher its performance. In some cases, various buses must be synchronized for proper performance, such as the system bus and any expansion buses that run at the front-side bus speed. Other times, one bus will reference another for its own speed. The internal bus speed of a CPU is derived from the front-side bus clock, for instance. The buses presented throughout this chapter are accompanied by their speeds, where appropriate.
A chipset is a collection of chips or circuits that perform interface and peripheral functions for the processor. This collection of chips is usually the circuitry that provides interfaces for memory, expansion cards, and onboard peripherals, and it generally dictates how a motherboard will communicate with the installed peripherals.
Chipsets are usually given a name and model number by the original manufacturer. For example, B550 and X570 are chipsets that support Advanced Micro Devices, Inc. (AMD) processors, and Z490 and H410 are Intel motherboard chipsets. Typically, the manufacturer and model tell you that your particular chipset has a certain set of features (for example, type of CPU and RAM supported, type and brand of onboard video, and so on). Don't worry about memorizing any chipset names—you can look them up online to understand their features.
Chipsets can be made up of one or several integrated circuit chips. Intel-based motherboards, for example, typically use two chips. To know for sure, you must check the manufacturer's documentation, especially because cooling mechanisms frequently obscure today's chipset chips, sometimes hindering visual brand and model identification.
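If you can't identify the chipset visually, one practical approach on a Windows system is to query the motherboard maker and model and then look up the chipset in the vendor's documentation. The following is a minimal PowerShell sketch; the exact values returned depend on what the board's firmware reports.
# Report the motherboard manufacturer and model so the chipset can be looked up online
Get-CimInstance -ClassName Win32_BaseBoard |
    Select-Object Manufacturer, Product, Version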
Chipsets can be divided into two major functional groups, called Northbridge and Southbridge. Let's take a brief look at these groups and the functions they perform.
The Northbridge subset of a motherboard's chipset is the set of circuitry or chips that performs one very important function: management of high-speed peripheral communications. The Northbridge is responsible primarily for communications with integrated video using PCIe, for instance, and processor-to-memory communications. Therefore, it can be said that much of the true performance of a PC relies on the specifications of the Northbridge component and its communications capability with the peripherals it controls.
The communications between the CPU and memory occur over what is known as the front-side bus (FSB), which is just a set of signal pathways connecting the CPU and main memory, for instance. The clock signal that drives the FSB is used to drive communications by certain other devices, such as PCIe slots, making them local-bus technologies. The back-side bus (BSB), if present, is a set of signal pathways between the CPU and external cache memory. The BSB uses the same clock signal that drives the FSB. If no back-side bus exists, cache is placed on the front-side bus with the CPU and main memory.
The Northbridge is directly connected to the Southbridge (discussed next). It controls the Southbridge and helps to manage the communications between the Southbridge and the rest of the computer.
The Southbridge subset of the chipset is responsible for providing support to the slower onboard peripherals (USB, Serial and Parallel ATA, parallel ports, serial ports, and so on), managing their communications with the rest of the computer and the resources given to them. These components do not need to keep up with the external clock of the CPU and do not represent a bottleneck in the overall performance of the system. Any component that would impose such a restriction on the system should eventually be developed for FSB attachment.
In other words, if you're considering any component other than the CPU, memory and cache, or PCIe slots, the Southbridge is in charge. Most motherboards today have integrated USB, network, and analog and digital audio ports for the Southbridge to manage, for example, all of which are discussed in more detail later in this chapter or in Chapter 3, “Peripherals, Cables, and Connectors.” The Southbridge is also responsible for managing communications with the slower expansion buses, such as PCI, and legacy buses.
Figure 1.3 is a photo of the chipset of a motherboard, with the heat sink of the Northbridge at the top left, connected to the heat-spreading cover of the Southbridge at the bottom right.
FIGURE 1.3 A modern computer chipset
Figure 1.4 shows a schematic of a typical motherboard chipset (both Northbridge and Southbridge) and the components with which they interface. Notice which components interface with which parts of the chipset.
FIGURE 1.4 A schematic of a typical motherboard chipset
The most visible parts of any motherboard are the expansion slots. These are small plastic slots, usually from 1 to 6 inches long and approximately ½ inch wide. As their name suggests, these slots are used to install various devices in the computer to expand its capabilities. Some expansion devices that might be installed in these slots include video, network, sound, and disk interface cards.
If you look at the motherboard in your computer, you will more than likely see one of the main types of expansion slots used in computers today, which are PCI and PCIe. In the following sections, we will cover how to visually identify the different expansion slots on the motherboard.
It's now considered an old technology, but many motherboards in use today still contain 32-bit Peripheral Component Interconnect (PCI) slots. They are easily recognizable because they are only around 3 inches long and classically white, although modern boards take liberties with the color. PCI slots became extremely popular with the advent of Pentium-class processors in the mid-1990s. Although popularity has shifted from PCI to PCIe, the PCI slot's service to the industry cannot be ignored; it has been an incredibly prolific architecture for many years.
PCI expansion buses operate at 33 MHz or 66 MHz (version 2.1) over a 32-bit (4-byte) channel, resulting in data rates of 133 MBps and 266 MBps, respectively, with 133 MBps being the most common, server architectures excluded. PCI is a shared-bus topology, however, so mixing 33 MHz and 66 MHz adapters in a 66 MHz system will slow all adapters to 33 MHz. Older servers might have featured 64-bit PCI slots as well, since version 1.0, which double the 32-bit data rates. See the sidebar “Arriving at the Exact Answer” for help with understanding the math involved in frequencies and bit rates.
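As a rough sketch of that math, multiply the bus clock (about 33.33 million or 66.66 million transfers per second) by the 4-byte bus width; the PowerShell lines below simply restate that arithmetic.
# 32-bit PCI bus width = 4 bytes transferred per clock cycle
$busWidthBytes = 32 / 8
# ~33.33 million transfers per second yields about 133 MBps
33.33e6 * $busWidthBytes / 1e6
# ~66.66 million transfers per second yields about 266 MBps
66.66e6 * $busWidthBytes / 1e6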
PCI slots and adapters are manufactured in 3.3V and 5V versions. Universal adapters are keyed to fit in slots based on either of the two voltages. The notch in the card edge of the common 5V slots and adapters is oriented toward the front of the motherboard, and the notch in the 3.3V adapters toward the rear. Figure 1.5 shows several PCI expansion slots. Note the 5V 32-bit slot in the foreground and the 3.3V 64-bit slots. Also notice that a universal 32-bit card, which has notches in both positions, is inserted into and operates fine in the 64-bit 3.3V slot in the background.
FIGURE 1.5 PCI expansion slots
The most common expansion slot architecture that is being used by motherboards is PCI Express (PCIe). It was designed to be a replacement for PCI, as well as an older video card standard called accelerated graphics port (AGP). PCIe has the advantage of being faster than AGP while maintaining the flexibility of PCI. PCIe has no plug compatibility with either AGP or PCI. Some modern PCIe motherboards can be found with regular PCI slots for backward compatibility, but AGP slots have not been included for many years.
PCIe is casually referred to as a bus architecture to simplify its comparison with other bus technologies. True expansion buses share total bandwidth among all slots, each of which taps into different points along the common bus lines. In contrast, PCIe uses a switching component with point-to-point connections to slots, giving each component full use of the corresponding bandwidth and producing more of a star topology versus a bus. Furthermore, unlike other expansion buses, which have parallel architectures, PCIe is a serial technology, striping data packets across multiple serial paths to achieve higher data rates.
PCIe uses the concept of lanes, which are the switched point-to-point signal paths between any two PCIe components. Each lane that the switch interconnects between any two intercommunicating devices comprises a separate pair of wires for both directions of traffic. Each PCIe pairing between cards requires a negotiation for the highest mutually supported number of lanes. The single lane or combined collection of lanes that the switch interconnects between devices is referred to as a link.
There are seven different link widths supported by PCIe, designated x1 (pronounced “by 1”), x2, x4, x8, x12, x16, and x32, with x1, x4, and x16 being the most common. The x8 link width is less common than these but more common than the others. A slot that supports a particular link width is of a physical size related to that width because the width is based on the number of lanes supported, requiring a related number of wires. As a result, an x8 slot is longer than an x1 slot but shorter than an x16 slot. Every PCIe slot has a 22-pin portion in common toward the rear of the motherboard, which you can see in Figure 1.7, in which the rear of the motherboard is to the left. These 22 pins comprise mostly voltage and ground leads. (The PCIe slots are the longer and lighter ones in Figure 1.6.)
FIGURE 1.6 PCIe expansion slots
Four major versions of PCIe are currently available in the market: 1.x, 2.x, 3.0, and 4.0. For the four versions, a single lane, and therefore an x1 slot, operates in each direction (or transmits and receives from either communicating device's perspective), at a data rate of 250 MBps (almost twice the rate of the most common PCI slot), 500 MBps, approximately 1 GBps, and roughly 2 GBps, respectively.
An associated bidirectional link has a nominal throughput of double these rates. Use the doubled rate when comparing PCIe to other expansion buses because those other rates are for bidirectional communication. This means that the 500 MBps bidirectional link of an x1 slot in the first version of PCIe was comparable to PCI's best, a 64-bit slot running at 66 MHz and producing a throughput of 533 MBps.
Combining lanes simply results in a linear multiplication of these rates. For example, a PCIe 1.1 x16 slot is capable of 4 GBps of throughput in each direction, 16 times the 250 MBps x1 rate. Bidirectionally, this fairly common slot produces a throughput of 8 GBps. Each subsequent PCIe specification doubles this throughput. The newer PCIe 5.0 specification produces bidirectional x16 throughput of approximately 128 GBps, which is faster than some DDR4 standards (which is to say, it's really, really fast).
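To make the lane arithmetic concrete, here is a small PowerShell sketch that scales the approximate per-lane, per-direction rates quoted above by a lane count; the version labels and rates simply restate the figures from the text.
# Approximate per-lane, per-direction rates in MBps (from the text above)
$perLane = @{ 'PCIe 1.x' = 250; 'PCIe 2.x' = 500; 'PCIe 3.0' = 1000; 'PCIe 4.0' = 2000 }
$lanes = 16
foreach ($version in $perLane.Keys) {
    $eachWay = $perLane[$version] * $lanes   # throughput in each direction
    $biDir   = $eachWay * 2                  # nominal bidirectional throughput
    "$version x$lanes link: $eachWay MBps each way, $biDir MBps bidirectional"
}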
Because of its high data rate, PCIe is the current choice of gaming aficionados. Additionally, technologies similar to NVIDIA's Scalable Link Interface (SLI) allow such users to combine preferably identical graphics adapters in appropriately spaced PCIe x16 slots with a hardware bridge to form a single virtual graphics adapter. The job of the bridge is to provide non-chipset communication among the adapters. The bridge is not a requirement for SLI to work, but performance suffers without it. SLI-ready motherboards allow two, three, or four PCIe graphics adapters to pool their graphics processing units (GPUs) and memory to feed graphics output to a single monitor attached to the adapter acting as the primary SLI device. SLI implementation results in increased graphics performance over single-PCIe and non-PCIe implementations.
Refer back to Figure 1.6, which is a photo of an SLI-ready motherboard with three PCIe x16 slots (every other slot, starting with the top one), one PCIe x1 slot (second slot from the top), and two PCI slots (first and third slots from the bottom). Notice the latch and tab that secures the x16 adapters in place by their hooks. Any movement of these high-performance devices can result in temporary failure or poor performance.
Memory, or random access memory (RAM), slots are the next most notable slots on a motherboard. These slots are designed for the modules that hold memory chips that make up primary memory, which is used to store currently used data and instructions for the CPU. Many types of memory are available for PCs today. In this chapter, you will become familiar with the appearance and specifications of the slots on the motherboard so that you can identify them and appropriately install or replace RAM.
For the most part, PCs today use memory chips arranged on a small circuit board. A dual in-line memory module (DIMM) is one type of circuit board. Today's DIMMs differ in the number of conductors, or pins, that each particular physical form factor uses. Some common examples include 168-, 184-, 240-, and 288-pin configurations. In addition, laptop memory comes in smaller form factors known as small outline DIMMs (SODIMMs) and MicroDIMMs. More detail on memory packaging and the technologies that use them can be found later in this chapter in the section “Understanding Memory.”
Memory slots are easy to identify on a motherboard. Classic DIMM slots were usually black and, like all memory slots, were placed very close together. DIMM slots with color-coding are more common these days, however. The color-coding of the slots acts as a guide to the installer of the memory. See the section “Single-, Dual-, Triple-, and Quad-Channel Memory” later in this chapter for more on the purpose of this color-coding. Consult the motherboard's documentation to determine the specific modules allowed as well as their required orientation. The number of memory slots varies from motherboard to motherboard, but the structure of the different slots is similar. Metal pins in the bottom make contact with the metallic pins on each memory module. Small metal or plastic tabs on each side of the slot keep the memory module securely in its slot. Figure 1.8 shows four memory slots, with the CPU socket included for reference.
FIGURE 1.8 Double Data Rate (DDR) memory slots
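If you'd rather not open the case just to see what occupies the memory slots, one option on a Windows system is to ask WMI for the installed modules, as sketched below. Capacity is reported in bytes, and FormFactor is a numeric code (commonly 8 for a DIMM and 12 for a SODIMM).
# List installed memory modules by slot, size, rated speed, and form factor code
Get-CimInstance -ClassName Win32_PhysicalMemory |
    Select-Object DeviceLocator, @{Name='CapacityGB'; Expression={ $_.Capacity / 1GB }}, Speed, FormFactor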
Sometimes, the amount of primary memory installed is inadequate to service additional requests for memory resources from newly launched applications. When this condition occurs, the user may receive an “out of memory” error message and an application may fail to launch. One solution for this is to use the hard drive as additional RAM. This space on the hard drive is known as a swap file or a paging file. The technology in general is known as virtual memory or virtual RAM. The paging file is called PAGEFILE.SYS in modern Microsoft operating systems. It is an optimized space that can deliver information to RAM at the request of the memory controller faster than if it came from the general storage pool of the drive. It's located at c:\pagefile.sys by default. Note that virtual memory cannot be used directly from the hard drive; it must be paged into RAM as the oldest contents of RAM are paged out to the hard drive to make room. The memory controller, by the way, is the chip that manages access to RAM as well as adapters that have had a few hardware memory addresses reserved for their communication with the processor.
Nevertheless, relying too much on virtual memory (check your page fault statistics in the Reliability and Performance Monitor) results in the entire system slowing down noticeably. An inexpensive and highly effective solution is to add physical memory to the system, thus reducing its reliance on virtual memory. More information on virtual memory and its configuration can be found in Chapter 13, “Operating System Basics.”
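To see where the paging file lives and how heavily it is being used on a Windows 10 machine, one quick check is the sketch below; the sizes it returns are reported in megabytes.
# Show each paging file, its allocated size, and its current/peak usage in MB
Get-CimInstance -ClassName Win32_PageFileUsage |
    Select-Object Name, AllocatedBaseSize, CurrentUsage, PeakUsage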
Another type of memory common in PCs is cache memory, which is small and fast and logically sits between the CPU and RAM. Cache is a very fast form of memory forged from static RAM, which is discussed in detail in the section “Understanding Memory” later in this chapter. Cache improves system performance by predicting what the CPU will ask for next and prefetching this information before being asked. This paradigm allows the cache to be smaller in size than the RAM itself. Only the most recently used data and code or that which is expected to be used next is stored in cache.
You'll see three different cache designations:
Level 1 Cache: L1 cache is the smallest and fastest, and it's on the processor die itself. In other words, it's an integrated part of the manufacturing pattern that's used to stamp the processor pathways into the silicon chip. You can't get any closer to the processor than that.
Though the definition of L1 cache has not changed much over the years, the same is not true for other cache levels. L2 and L3 cache used to be on the motherboard but now have moved on-die in most processors as well. The biggest differences are the speed and whether they are shared.
The typical increasing order of capacity and distance from the processor die is L1 cache, L2 cache, L3 cache, RAM, and HDD/SSD (hard disk drive and solid-state drive—more on these in Chapter 2). This is also the typical decreasing order of speed. The following list includes representative capacities of these memory types. The cache capacities are for each core of the 10th generation Intel Core i7 processor. The other capacities are simply modern examples.
One way to find out how much cache your system has is to use a utility such as CPU-Z, as shown in Figure 1.9. CPU-Z is freeware that can show you the amount of cache, processor name and number, motherboard and chipset, and memory specifications. It can be found at www.cpuid.com.
FIGURE 1.9 Cache in a system
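If you don't have CPU-Z handy, Windows can report the larger cache levels itself. The sketch below lists the processor's L2 and L3 cache sizes in kilobytes; L1 isn't broken out by this class, so a tool like CPU-Z or the vendor's datasheet is still the place to go for per-core L1 detail.
# Report L2 and L3 cache sizes (in KB) for each installed processor
Get-CimInstance -ClassName Win32_Processor |
    Select-Object Name, L2CacheSize, L3CacheSize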
The “brain” of any computer is the central processing unit (CPU). There's no computer without a CPU. There are many different types of processors for computers—so many, in fact, that you will learn about them later in this chapter in the section “Understanding Processors.”
Typically, in today's computers, the processor is the easiest component to identify on the motherboard. It is usually the component that has either a fan or a heat sink (usually both) attached to it (as shown in Figure 1.10). These devices are used to draw away and disperse the heat that a processor generates. This is done because heat is the enemy of microelectronics. Today's processors generate enough heat that, without heat dispersal, they would permanently damage themselves and the motherboard in a matter of minutes, if not seconds.
FIGURE 1.10 Two heat sinks, one with a fan
CPU sockets are almost as varied as the processors that they hold. Sockets are basically flat and have several columns and rows of holes or pins arranged in a square, as shown in Figure 1.11. The socket on the left is known as Socket AM4, made for AMD processors such as the Ryzen, and has holes to receive the pins on the CPU. This is known as a pin grid array (PGA) arrangement for a CPU socket; the holes and pins are laid out in a row/column orientation, an array of pins. The socket on the right is known as LGA 1200; it has spring-loaded pins in the socket that mate with a grid of flat contacts, called lands, on the CPU. The land grid array (LGA) is a newer technology that places the delicate pins (though sturdier than the pins on PGA chips) on the less expensive motherboard instead of on the more expensive CPU, the opposite of the aging PGA approach. Whichever device carries the pins has to be replaced if the pins become too damaged to function. PGA and LGA are mentioned again later in this chapter in the section “Understanding Processors.”
FIGURE 1.11 CPU socket examples
Modern CPU sockets have a mechanism in place that reduces the need to apply considerable force to the CPU to install a processor, which was necessary in the early days of personal computing. Given the extra surface area on today's processors, excessive pressure applied in the wrong manner could damage the CPU packaging, its pins, or the motherboard itself. For CPUs based on the PGA concept, zero insertion force (ZIF) sockets are exceedingly popular. ZIF sockets use a plastic or metal lever on one of the two lateral edges to lock or release the mechanism that secures the CPU's pins in the socket. The CPU rides on the mobile top portion of the socket, and the socket's contacts that mate with the CPU's pins are in the fixed bottom portion of the socket. The image of Socket AM4 shown on the left in Figure 1.11 illustrates the ZIF locking mechanism at the right edge of the socket.
For processors based on the LGA concept, a socket with a different locking mechanism is used. Because there are no receptacles in either the motherboard or the CPU, there is no opportunity for a locking mechanism that holds the component with the pins in place. LGA-compatible sockets, as they're called despite the misnomer, have a lid that closes over the CPU and is locked in place by an L-shaped arm that borders two of the socket's edges. The nonlocking leg of the arm has a bend in the middle that latches the lid closed when the other leg of the arm is secured. The right image in Figure 1.11 shows an LGA socket with no CPU installed and the locking arm secured over the lid's tab, along the bottom edge.
Listing out all the desktop PC socket types you might encounter would take a long time. Instead, we'll give you a sampling of some that you might see. The first thing you might notice is that sockets are made for Intel or AMD processors, but not both. Keep that compatibility in mind when replacing a motherboard or a processor. Make sure that the processor and motherboard were designed for each other (even within the Intel or AMD families); otherwise, they won't fit each other and won't work. Table 1.1 lists some common desktop socket/CPU relationships. Servers and laptops/tablets generally have different sockets altogether, although some CPU sockets will support processors designed for desktops or servers.
TABLE 1.1 Desktop PC socket types and the processors they support

Socket | Released | Type | Processors
---|---|---|---
LGA 1200 | 2020 | LGA | Intel Comet Lake and Rocket Lake
Socket AM4 | 2017 | PGA | AMD Ryzen 3, Ryzen 5, Ryzen 7, Ryzen 9, Athlon 200GE
Socket TR4 | 2017 | LGA | AMD Ryzen Threadripper
LGA 2066 (Socket R4) | 2017 | LGA | Intel Skylake-X and Kaby Lake-X
LGA 1151 (Socket H4) | 2015 | LGA | Intel Skylake, Kaby Lake, and Coffee Lake
Socket FM2+ | 2014 | PGA | AMD Kaveri and Godavari
Socket AM1 | 2014 | PGA | AMD Athlon and Sempron
LGA 1150 (Socket H3) | 2013 | LGA | Intel Haswell, Haswell Refresh, and Broadwell
Socket FM2 | 2012 | PGA | AMD Trinity

TABLE 1.2 Select Intel desktop processors

Name (Year) | Gen | Socket | Core i9 | Core i7 | Core i5 | Core i3
---|---|---|---|---|---|---
Alder Lake (2021) | 12th | LGA 1700 | 129xx | 127xx | 126xx | n/a
Rocket Lake (2020) | 11th | LGA 1200 | 119xx | 117xx | 116xx, 115xx, 114xx | n/a
Comet Lake (2019) | 10th | LGA 1200 | 109xx | 107xx | 106xx, 105xx, 104xx | 103xx, 101xx
When it comes to motherboard compatibility, the two biggest things to keep in mind are the processor type and the case. If either of those is misaligned with what the motherboard supports, you're going to have problems.
Thus far, as we've talked about desktop motherboards and their CPU sockets, we have shown examples of boards that have just one socket. There are motherboards that have more than one CPU socket; conveniently, they are called multisocket (often written as two words) motherboards. Figure 1.12 shows a two-socket motherboard made by GIGABYTE. The two CPU sockets are easily identifiable; note that each CPU socket has eight dedicated memory slots.
FIGURE 1.12 GIGABYTE multisocket motherboard
Trying to categorize server motherboards can be a bit challenging. Servers are expected to do a lot more work than the average PC, so it makes sense that servers need more powerful hardware. Servers can, and quite often do, make do with a single processor on a “normal” PC motherboard. At the same time, there are motherboards designed specifically for servers that support multiple processors (two and four sockets are common) and have expanded memory and networking capabilities as well. Further, while server motherboards are often ATX-sized, many server manufacturers create custom boards to fit inside their chassis. Regardless, multisocket and server motherboards will generally use the same CPU sockets that other motherboards use.
In small mobile devices, space is at a premium. Some manufacturers will use standard small form factor motherboards, but most create their own boards to fit inside specific cases. An example of an oddly shaped Dell laptop motherboard is shown in Figure 1.13. When replacing a laptop motherboard, you almost always need to use one from the exact same model; otherwise, it won't fit inside the case.
FIGURE 1.13 Dell laptop motherboard
Nearly all laptop processors are soldered onto the motherboard, so you don't have to worry about CPU socket compatibility. If the CPU dies, you replace the entire motherboard. We will cover laptop components more extensively in Chapter 9, “Laptop and Mobile Device Hardware.”
In addition to these sockets and slots on the motherboard, a special connector (the 24-pin white block connector shown in Figure 1.14) allows the motherboard to be connected to the power supply to receive power. This connector is where the ATX power connector (mentioned in Chapter 2 in the section “Understanding Power Supplies”) plugs in.
FIGURE 1.14 An ATX power connector on a motherboard
Nearly all users store data, and the most widely used data storage device is a hard drive. Hard drives are great because they store data even when the device is powered off, which explains why they are sometimes referred to as nonvolatile storage. There are multiple types of hard drives, and we'll get into them in more detail in Chapter 2. Of course, those drives need to connect to the motherboard, and that's what we'll cover here.
At one time, integrated drive electronics (IDE) drives were the most common type of hard drive found in computers. Though often thought of in relation to hard drives, IDE was much more than a hard drive interface; it was also a popular interface for many other drive types, including optical drives and tape drives. Today, IDE is referred to as Parallel ATA (PATA) and is considered a legacy technology. Figure 1.15 shows two PATA interfaces; you can see that one pin in the center is missing (as a key) to ensure that the cable gets attached properly. The industry now favors Serial ATA instead.
FIGURE 1.15 Two PATA hard drive connectors
Serial ATA (SATA) began as an enhancement to the original ATA specifications, also known as IDE and, today, PATA. Technology is proving that the orderly progression of data in a single-file path is superior to placing multiple bits of data in parallel and trying to synchronize their transmission to the point that all of the bits arrive simultaneously. In other words, if you can build faster transceivers, serial transmissions are simpler to adapt to the faster rates than are parallel transmissions.
The first version of SATA, known as SATA 1.5 Gbps (and also by the less-preferred terms SATA I and SATA 150), used an 8b/10b-encoding scheme that requires 2 non-data overhead bits for every 8 data bits. The result is a loss of 20 percent of the rated bandwidth. The silver lining, however, is that the math becomes quite easy. Normally, you have to divide by 8 to convert bits to bytes. With 8b/10b encoding, you divide by 10. Therefore, the 150 MBps throughput for which this version of SATA was nicknamed is easily derived as 1/10 of the 1.5 Gbps transfer rate. The original SATA specification also provided for hot swapping at the discretion of the motherboard and drive manufacturers.
Similar math works for SATA 3 Gbps, tagged as SATA II and SATA 300, and SATA 6 Gbps, which you might hear called SATA III or SATA 600. Note that each subsequent version doubles the throughput of the previous version. Figure 1.16 shows four SATA headers on a motherboard that will receive the data cable. Note that identifiers silkscreened onto motherboards often enumerate such headers. The resulting numbers are not related to the SATA version that the header supports. Instead, such numbers serve to differentiate headers from one another and to map to firmware identifiers, often visible within the BIOS configuration utility.
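If you'd like to verify the math, here is a short Python sketch of our own (not part of any SATA specification) that converts each version's line rate into usable throughput by dividing by 10 to account for the 8b/10b overhead:

    # Convert SATA line rates (Gbps) to usable throughput (MBps).
    # With 8b/10b encoding, every 10 bits on the wire carry only 8 bits of data,
    # so dividing megabits by 10 (instead of 8) yields megabytes per second.
    sata_versions = {"SATA 1.5 Gbps": 1.5, "SATA 3 Gbps": 3.0, "SATA 6 Gbps": 6.0}

    for name, gbps in sata_versions.items():
        line_rate_mbps = gbps * 1000              # megabits per second on the wire
        throughput_MBps = line_rate_mbps / 10     # usable megabytes per second
        print(f"{name}: {throughput_MBps:.0f} MBps")   # 150, 300, and 600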
Another version of SATA that you will see is external SATA (eSATA). As you might expect based upon the name, this technology was developed for devices that reside outside of the case, not inside it. Many motherboards have an eSATA connector built in. If not, you can buy an expansion card that has eSATA ports and plugs into internal SATA connectors. Figure 1.17 shows an example of how the two ports are different. Finally, SATA and eSATA standards are compatible. In other words, SATA 6 Gbps equals eSATA 6 Gbps.
FIGURE 1.16 Four SATA headers
FIGURE 1.17 SATA (left) and eSATA (right) cables and ports
The most recent development in expansion connections is M.2 (pronounced “M dot 2”). So far it's primarily used for hard drives, but other types of devices, such as Wi-Fi, Bluetooth, Global Positioning System (GPS), and near-field communication (NFC) adapters are built for M.2 as well. We will cover M.2 in more depth in Chapter 2, thanks to how important it is to storage solutions.
It's important to call out that M.2 is a form factor, not a bus standard. The form factor supports existing SATA, USB, and PCIe buses. This means that if you hook up a SATA device to an M.2 slot (with the appropriate connector), the device speed will be regulated by SATA standards. Figure 1.18 shows two M.2 connectors on a motherboard.
FIGURE 1.18 Two M.2 slots
Photo credit: Andrew Cunningham/Ars Technica
From the time of the very first personal computer, there has been a minimum expectation as to the buttons and LEDs that should be easily accessible to the user. At first, they generally appeared on the front of the case. In today's cases, buttons and LEDs have been added and placed on the top of the case or on a beveled edge between the top and the front. They have also been left on the front or have been used in a combination of these locations. These buttons and lights, as well as other external connectors, plug into the motherboard through a series of pins known as headers. Examples of items that are connected using a header include:
Power button and power light
Reset button
Drive activity light
Audio jacks for front-panel audio
USB ports
Headers for different connections are often spread throughout different locations on the motherboard—finding the right one can sometimes be a frustrating treasure hunt. Other headers are grouped together. For example, most of the headers for the items on the front or top panel of the case are often co-located. The purpose of the header will be printed on the motherboard, and while that may tell you what should connect there, it often lacks detail on how it should be connected. The motherboard manufacturer's website is a good place to go if you need a detailed diagram or instructions. Figure 1.19 shows several headers on a motherboard. On the left is a USB header, then a system fan header in the center, and a block of front panel headers on the right, including the hard drive light, reset button, chassis intrusion detector, and power light.
FIGURE 1.19 Motherboard headers
Users expect a power button that they can use to turn the computer on. (These were on the side or back of very early PCs.) The soft power feature available through the front power button, which is no more than a momentary switch wired to a contact on the motherboard, allows access to multiple effects based on how long the button is pressed. These effects can be changed through the BIOS or operating system. Users also expect a power light, often a green LED, to assure them that the button did its job.
The reset button appeared as a way to reboot the computer from a cold startup point without removing power from the components. Keeping the machine powered tends to prolong the life of the electronics affected by power cycling. Pressing the reset button also gets around software lockups because the connection to the motherboard allows the system to restart from the hardware level. One disadvantage of resetting without removing power is that certain circuits, such as memory chips, might need time to drain their charge for the reboot to be completely successful. This is why there is always a way to turn the computer off as well.
In the early days of personal computing, the hard disk drive's LED had to be driven by the drive itself. Before long, the motherboard was equipped with drive headers, so adding pins to drive the drive activity light was no issue. These days, all motherboards supply this connectivity. The benefit is that a single front-panel LED can represent activity on all internal drives. The disadvantage is that you cannot tell which drive is currently active. This tends to be a minor concern because you often know which drive you've accessed. If you haven't intentionally accessed any drive, it's likely that the drive holding the operating system or virtual-memory swap file is being accessed by the system itself. In contrast, external drives with removable media, such as optical drives, supply their own activity light on their faceplate.
Early generations of optical drives had to have a special cable attached to the rear of the drive, which was then attached to the sound card if audio CDs were to be heard through the speakers attached to the sound card. Sound emanating from a CD-ROM running an application, such as a game, did not have to take the same route and could travel through the same path from the drive as general data. The first enhancement to this arrangement came in the form of a front 3.5 mm jack on the drive's faceplate that was intended for headphones but could also have speakers connected to it. The audio that normally ran across the special cable was rerouted to the front jack when something was plugged into it.
Many of today's motherboards have 10-position pin headers designed to connect to standardized front-panel audio modules. Some of these modules have legacy AC'97 analog ports on them, whereas others have high-definition (HD) audio connections. Motherboards that accommodate both have a BIOS setting that enables you to choose which header you want to activate, with the HD setting most often being the default.
So many temporarily attached devices feature USB connectivity, such as USB keys (flash drives) and cameras, that front-panel connectivity is a must. Finding your way to the back of the system unit for a brief connection is hardly worth the effort in some cases. For many years, motherboards have supplied one or more 10-position headers for internal connectivity of front-panel USB ports. Because this header size is popular for many connectors, only 9 positions tend to have pins protruding, while the 10th position acts as a key, showing up in different spots for each connector type to discourage the connection of the wrong cable. Figure 1.20 shows USB headers on a motherboard. The labels “USB56” and “USB78” indicate that one block serves ports 5 and 6, while the other serves ports 7 and 8, all of which are arbitrary, based on the manufacturer's numbering convention. In each, the upper left pin is “missing,” which is the key.
FIGURE 1.20 Two motherboard USB headers
Firmware is the name given to any software that is encoded in hardware, usually a read-only memory (ROM) chip, and it can be run without extra instructions from the operating system. Most computers, large printers, and devices with no operating system use firmware in some sense. The best example of firmware is a computer's Basic Input/Output System (BIOS), which is burned into a chip. Also, some expansion cards, such as SCSI cards and graphics adapters, use their own firmware utilities for setting up peripherals.
The BIOS chip, also referred to as the ROM BIOS chip, is one of the most important chips on the motherboard. This special memory chip contains the BIOS system software that boots the system and allows the operating system to interact with certain hardware in the computer in lieu of requiring a more complex device driver to do so. The BIOS chip is easily identified: If you have a brand-name computer, this chip might have on it the name of the manufacturer and usually the word BIOS. For clones, the chip usually has a sticker or printing on it from one of the major BIOS manufacturers (AMI, Phoenix, Award, Winbond, and others). On later motherboards, the BIOS might be difficult to identify or it might even be integrated into the Southbridge, but the functionality remains regardless of how it's implemented.
The successor to the BIOS is the Unified Extensible Firmware Interface (UEFI). The extensible features of the UEFI allow for the support of a vast array of systems and platforms by allowing the UEFI access to system resources for storage of additional modules that can be added at any time. In the following section, you'll see how a security feature known as Secure Boot would not be possible with the classic BIOS. It is the extensibility of the UEFI that makes such technology feasible.
Figure 1.21 gives you an idea of what a modern BIOS/UEFI chip might look like on a motherboard. Despite the 1998 copyright on the label, which refers only to the oldest code present on the chip, this particular chip can be found on motherboards produced as late as 2009. Notice also the Reset CMOS jumper at the lower left and its configuration silkscreen at the upper left. You might use this jumper to clear the CMOS memory, discussed shortly, when an unknown password, for example, is keeping you out of the BIOS/UEFI configuration utility. The jumper in the photo is in the clear position, not the normal operating position. System bootup is typically not possible in this state.
FIGURE 1.21 A BIOS chip on a motherboard
At a basic level, the BIOS/UEFI controls system boot options such as the sequence of drives from which it will look for operating system boot files. The boot sequence menu from a BIOS/UEFI is shown in Figure 1.22. Other interface configuration options will be available too, such as enabling or disabling integrated ports or an integrated video card. A popular option on corporate computers is to disable the USB ports, which can increase security and decrease the risk of contracting a virus.
FIGURE 1.22 BIOS boot sequence
Most BIOS/UEFI setup utilities have more to offer than a simple interface for making selections and saving the results. For example, these utilities often offer diagnostic routines that you can use to have the BIOS/UEFI analyze the state and quality of the same components that it inspects during bootup, but at a much deeper level.
Consider the scenario where a computer is making noise and overheating. You can use the BIOS/UEFI configuration utility to access built-in diagnostics to check the rotational speed of the motherboard fans. If the fans are running slower than expected, the noise could be related to the bearings of one or more fans, causing them to lose speed and, thus, cooling capacity.
There is often also a page within the utility that gives you access to such bits of information as current live readings of the temperature of the CPU and the ambient temperature of the interior of the system unit. On such a page, you can set the temperature at which the BIOS/UEFI sounds a warning tone and the temperature at which the BIOS/UEFI shuts down the system to protect it. You can also monitor the instantaneous fan speeds, bus speeds, and voltage levels of the CPU and other vital landmarks to make sure that they are all within acceptable ranges. You might also be able to set a lower fan speed threshold at which the system warns you. In many cases, some of these levels can be altered to achieve such phenomena as overclocking, which uses the BIOS/UEFI to set the system clock higher than the CPU's rating, or undervolting, which lowers the voltage of the CPU and RAM to reduce power consumption and heat production.
The BIOS/UEFI has always played a role in system security. Since the early days of the personal computer, the BIOS allowed the setting of two passwords—the user (or boot) password and the supervisor/administrator, or access, password. The boot password is required to leave the initial power-on screens and begin the process of booting an operating system. The administrator password is required before entering the BIOS/UEFI configuration utility. It is always a good idea to set the administrator password, but the boot password should not be set on public systems that need to boot on their own, in case of an unforeseen power cycle.
In more recent years, the role of the BIOS/UEFI in system security has grown substantially. BIOS/UEFI security now extends from power-on up to the point where the operating system is ready to take over. The BIOS/UEFI was a perfect candidate to supervise security and integrity in a platform-independent way. Coupled with the Trusted Platform Module (TPM), a dedicated security coprocessor, or cryptoprocessor, the BIOS can be configured to boot the system only after authenticating the boot device. This authentication confirms that the hardware being booted to has been tied to the system containing the BIOS/UEFI and TPM, a process known as sealing. Sealing the devices to the system also prohibits the devices from being used after removing them from the system. For further security, the keys created can be combined with a PIN or password that unlocks their use or with a USB flash drive that must be inserted before booting.
Microsoft's BitLocker uses the TPM to encrypt the entire drive. Normally, only user data can be encrypted, but BitLocker encrypts operating-system files, the Registry, the hibernation file, and so on, in addition to those files and folders that file-level encryption secures. If any changes have occurred to the Windows installation, the TPM does not release the keys required to decrypt and boot to the secured volume. TPM is configured in Windows under Start ➢ Settings ➢ Update & Security ➢ Windows Security ➢ Device security, as shown in Figure 1.23.
Most motherboards come with a TPM chip installed, but if they don't (and don't provide a header for adding a TPM module), it's generally not possible to add one. In those situations, you can enable the same functionality by using a hardware security module (HSM). An HSM is a security device that can manage, create, and securely store encryption keys—it enables users to safely encrypt and decrypt data. An HSM can take a few different forms. The simplest is a USB or PCIe device that plugs into a system. It could be set up for file encryption and decryption, required for the computer to boot, or both. For large-scale solutions, HSM-enabled servers can provide crypto services to an entire network.
When a certain level of UEFI is used, the system firmware can also check digital signatures for each boot file it uses to confirm that it is the approved version and has not been tampered with. This technology is known as Secure Boot. An example of a BIOS/UEFI's boot security screen is shown in Figure 1.24. The boot files checked include option ROMs (defined in the following section), the boot loader, and other operating-system boot files. Only if the signatures are valid will the firmware load and execute the associated software.
FIGURE 1.23 Windows TPM configuration screen
FIGURE 1.24 Secure boot in UEFI
The problem can now arise that a particular operating system might not be supported by the database of known-good signatures stored in the firmware. In such a situation, the system manufacturer can supply an extension that the UEFI can use to support that operating system—a task not possible with traditional BIOS-based firmware.
Some BIOS firmware can monitor the status of a contact on the motherboard for intrusion detection. If the feature in the BIOS is enabled and the sensor on the chassis is connected to the contact on the motherboard, the removal of the cover will be detected and logged by the BIOS. This can occur even if the system is off, thanks to the CMOS battery. At the next bootup, the BIOS will notify you of the intrusion. No notification occurs over subsequent boots unless additional intrusion is detected.
A major function of the BIOS/UEFI is to perform a process known as a power-on self-test (POST). POST is a series of system checks performed by the system BIOS/UEFI and other high-end components, such as the SCSI BIOS and the video BIOS, known collectively as option ROMs. Among other things, the POST routine verifies the integrity of the BIOS/UEFI itself. It also verifies and confirms the size of primary memory. During POST, the BIOS also analyzes and catalogs other forms of hardware, such as buses and boot devices, as well as manages the passing of control to the specialized BIOS/UEFI routines mentioned earlier. The BIOS/UEFI is responsible for offering the user a key sequence to enter the configuration routine as POST is beginning. Finally, once POST has completed successfully, the BIOS/UEFI selects the boot device highest in the configured boot order and executes the master boot record (MBR) or similar construct on that device so that the MBR can call its associated operating system's boot loader and continue booting up.
The POST process can end with a beep code or displayed code that indicates the issue discovered. Each BIOS/UEFI publisher has its own series of codes that can be generated. Figure 1.25 shows a simplified POST display during the initial boot sequence of a computer.
FIGURE 1.25 An example of a system POST screen
Your PC has to keep certain settings when it's turned off and its power cord is unplugged:
Date and time
Hard drive configuration and boot order
BIOS/UEFI passwords and security settings
Other configuration choices, such as enabled or disabled integrated devices and clock or voltage adjustments
Consider a situation where you added a new graphics adapter to your desktop computer, but the built-in video port remains active, prohibiting the new interface from working. The solution might be to alter your BIOS/UEFI configuration to disable the internal graphics adapter, so that the new one will take over. Similar reconfiguration of your BIOS/UEFI settings might be necessary when overclocking—or changing the system clock speed—is desired, or when you want to set BIOS/UEFI-based passwords or establish TPM-based whole-drive encryption, as with Microsoft's BitLocker. While not utilized as much today, the system date and time can also be altered in the BIOS/UEFI configuration utility of your system; in the early days of personal computing, the date and time might actually have needed to be changed this way.
Your PC keeps these settings in a special memory chip called the complementary metal oxide semiconductor (CMOS) memory chip. Actually, CMOS (usually pronounced see-moss) is a manufacturing technology for integrated circuits. The first commonly used chip made from CMOS technology was a type of memory chip, the memory for the BIOS/UEFI. As a result, the term CMOS stuck and is the accepted name for this memory chip.
The BIOS/UEFI starts with its own default information and then reads information from the CMOS, such as which hard drive types are configured for this computer to use, which drive(s) it should search for boot sectors, and so on. Any overlapping information read from the CMOS overrides the default information from the BIOS/UEFI. A lack of corresponding information in the CMOS does not delete information that the BIOS knows natively. This process is a merge, not a write-over. CMOS memory is usually not upgradable in terms of its capacity and might be integrated into the BIOS/UEFI chip or the Southbridge.
To keep its settings, integrated circuit-based memory must have power constantly. When you shut off a computer, anything that is left in this type of memory is lost forever. The CMOS manufacturing technology produces chips with very low power requirements. As a result, today's electronic circuitry is more susceptible to damage from electrostatic discharge (ESD). Another ramification is that it doesn't take much of a power source to keep CMOS chips from losing their contents.
To prevent CMOS from losing its rather important information, motherboard manufacturers include a small battery called the CMOS battery to power the CMOS memory, shown in the bottom-left corner of Figure 1.26. The batteries come in different shapes and sizes, but they all perform the same function. Most CMOS batteries look like large watch batteries or small cylindrical batteries. Today's CMOS batteries are most often of a long-life, non-rechargeable lithium chemistry.
FIGURE 1.26 CMOS battery
Now that you've learned the basics of the motherboard, you need to learn about the most important component on the motherboard: the CPU. The role of the CPU, or central processing unit, is to control and direct all the activities of the computer using both external and internal buses. From a technical perspective, the job of the CPU is to process, or do math on, large strings of binary numbers—0s and 1s. It is a processor chip consisting of an array of millions or billions of transistors. Intel and Advanced Micro Devices, Inc. (AMD) are the two largest PC-compatible CPU manufacturers. Their chips were featured in Table 1.1 during the discussion of the sockets into which they fit.
Today's AMD and Intel CPUs should be compatible with every PC-based operating system and application in the market. It's possible that you could run into an app that doesn't work quite right on an AMD chip, but those cases are exceedingly rare. From a compatibility standpoint, the most important thing to remember is that the motherboard and processor need to be made for each other. The rest of the hardware plugs into the motherboard and will be CPU brand agnostic.
Older CPUs are generally square, with contacts arranged in a pin grid array (PGA). Prior to 1981, chips were found in a rectangle with two rows of 20 pins known as a dual in-line package (DIP)—see Figure 1.27. There are still integrated circuits that use the DIP form factor; however, the DIP form factor is no longer used for PC CPUs. Most modern CPUs use the LGA form factor. Figure 1.11, earlier in this chapter, shows an LGA socket next to a PGA socket. Additionally, the ATX motherboard in Figure 1.2 has a PGA socket, whereas the micro ATX motherboard has an LGA.
FIGURE 1.27 DIP and PGA
Intel and AMD both make extensive use of an inverted socket/processor combination of sorts. As mentioned earlier, the LGA packaging calls for the pins to be placed on the motherboard, while the mates for these pins are on the processor packaging. As with PGA, LGA is named for the landmarks on the processor, not the ones on the motherboard. As a result, the grid of metallic contact points, called lands, on the bottom of the CPU gives this format its name.
You can easily identify which component inside the computer is the CPU because it is a large square lying flat on the motherboard with a very large heat sink and fan (refer to Figure 1.10). The CPU is almost always located very close to the RAM to improve system speed, as shown in Figure 1.1, Figure 1.2, and Figure 1.8.
As noted in the previous section, the functional job of the processor is to do math on very large strings of 0s and 1s. How the CPU goes about doing that depends upon its architecture. For commonly used processors, there are two major categories—those based on Complex Instruction Set Computing (CISC) and those based on Reduced Instruction Set Computing (RISC).
CISC (pronounced like disk, but with a “c”) and RISC (pronounced risk) are examples of an instruction set architecture (ISA). Essentially, it's the set of commands that the processor can execute. Both types of chips, when combined with software, can ultimately perform all the same tasks. They just go about it differently. When programmers develop code, they develop it for a CISC or a RISC platform.
As the CISC name implies, instructions sent to the computer are relatively complex (as compared to RISC), and as such they can do multiple mathematical tasks with one instruction, and each instruction can take several clock cycles to complete. We'll talk more about CPU speeds in the “CPU Characteristics” section later, but for now, know that if a CPU is advertised as having 3.8 GHz speed, that means it can complete roughly 3.8 billion cycles in one second. The core of a processor can only do one thing at a time—it just does them very, very quickly so it looks like it's multitasking.
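As a quick back-of-the-envelope illustration (a sketch of our own, not a benchmark), a 3.8 GHz rating means roughly 3.8 billion cycles per second, so a single cycle lasts only about a quarter of a nanosecond:

    # A 3.8 GHz clock ticks about 3.8 billion times per second.
    clock_hz = 3.8e9
    seconds_per_cycle = 1 / clock_hz                      # duration of one cycle
    print(f"{clock_hz:,.0f} cycles per second")           # 3,800,000,000
    print(f"{seconds_per_cycle * 1e9:.3f} ns per cycle")  # about 0.263 ns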
CISC was the original ISA for microprocessors, and the most well-known example of CISC technology is the x64/x86 platform popularized by Intel. AMD processors are CISC chips as well. So where did the terms x64 and x86 come from? First, just a bit more theory.
There is a set of data lines between the CPU and the primary memory of the system—remember the bus? The most common bus today is 64 bits wide, although there are still some 32-bit buses kicking around out there. In older days, buses could theoretically be as narrow as 2 bits, and 8-bit and 16-bit buses were popular for CPUs for several years. The wider the bus, the more data that can be processed per unit of time, and hence, more work can be performed. Internal registers in the CPU might be only 32 bits wide, but with a 64-bit system bus, two separate pipelines can receive information simultaneously. For true 64-bit CPUs, which have 64-bit internal registers and can run x64 versions of Microsoft operating systems, the external system data bus will always be 64 bits wide or some larger multiple thereof.
In the last paragraph we snuck in the term x64, and by doing that we also defined it. It refers to processors that are designed to work with 64 bits of data at a time. To go along with it, the operating system must also be designed to work with x64 chips.
Contrast that with processors that can handle only 32 bits of information at once. Those are referred to as x86 processors. You might look at that last sentence and be certain that we made a typo, but we didn't. For a long time, when 32-bit processors were the fastest on the PC market, Intel was the dominant player. Their CPUs had names like 80386 (aka i386) and 80486 (i486) and were based on the older 16-bit 80286 and 8086. Since the i386 and i486 were the most popular standards, the term x86 sprang up to mean a 32-bit architecture. So even though it may seem counterintuitive due to the numbers, x64 is newer and faster than x86.
Moving into the RISC architecture, the primary type of processor used today is known as an Advanced RISC Machine (ARM) CPU. Depending on who you talk to and which sources you prefer, there are conflicting stories about whether that's actually the right expansion of the acronym, as ARM originally stood for Acorn RISC Machine. Regardless of what it stands for, ARM is a competing technology to Intel and AMD x64-based CPUs.
Based on the RISC acronym, one might think that the reduced set of instructions the processor can perform makes it inferior somehow, but that's not the case. Tasks just need to get executed in different ways. To use a human example, let's say that we tell you to add the number 7 to itself seven times. One way to do that is to use one step of multiplication: 7 × 7 equals 49. But what if we said you can't use multiplication? You can still get to the answer by using addition. It will just take you seven steps instead of one. Same answer, different process. That's kind of how RISC compares to CISC.
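To make the analogy concrete, here is a small Python sketch of our own: the “CISC-style” function reaches the answer in one complex step, while the “RISC-style” function uses several simpler steps to produce the same result.

    def cisc_style(a, b):
        # One complex instruction does all the work in a single step.
        return a * b

    def risc_style(a, b):
        # Several simple instructions (repeated addition) reach the same answer.
        total = 0
        for _ in range(b):
            total += a
        return total

    print(cisc_style(7, 7))   # 49, computed in one step
    print(risc_style(7, 7))   # 49, computed in seven steps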
RISC processors have some advantages over their CISC counterparts. They can be made smaller than CISC chips and they produce less heat, making them ideal for mobile devices. In fact, nearly all smartphones use RISC-based chips, such as Apple's A15 and Samsung's Exynos series processors. On the downside, RISC processors use more memory than CISC ones do because it takes more code to complete a task with a RISC chip.
Like x64/x86, ARM processors have evolved over time—64-bit implementations are the most current, and they are designated ARM64; 32-bit versions are known simply as ARM.
Older processors were single-core, meaning that there was one set of instruction pathways through the processor. Therefore, they could process one set of tasks at a time. Designers then figured out how to speed up the computer by creating multiple cores within one processor package. Each core effectively operates as its own independent processor, provided that the operating system and applications are able to support multicore technology. (Nearly all do.)
Today, almost all desktop CPUs in the market are multicore. The number of cores you want may determine the processor to get. For example, the 10th-generation Intel Core i7 has eight cores whereas the i5 has six.
When looking for a processor, you might have several decisions to make. Do you want an Intel or AMD CPU? Which model? How fast should it be? What features does it need to support? In this section, we will take a look at some characteristics of processor performance.
The speed of the processor is generally described in clock frequency. Older chips were rated in megahertz (MHz) and new chips in gigahertz (GHz). Since the dawn of the personal computer industry, motherboards have included oscillators, quartz crystals shaved down to a specific geometry so that engineers know exactly how they will react when a current is run through them. The phenomenon of a quartz crystal vibrating when exposed to a current is known as the piezoelectric effect. The crystal (XTL) known as the system clock keeps the time for the flow of data on the motherboard. How the front-side bus uses the clock leads to an effective clock rate known as the FSB speed. As discussed in the section “Types of Memory” later in this chapter, the FSB speed is computed differently for different types of RAM (DDR3, DDR4, DDR5, and so forth). From here, the CPU multiplies the FSB speed to produce its own internal clock rate, producing the third speed mentioned thus far.
As a result of the foregoing tricks of physics and mathematics, there can be a discrepancy between the front-side bus frequency and the internal frequency that the CPU uses to latch data and instructions through its pipelines. This disagreement between the numbers comes from the fact that the CPU is capable of splitting the clock signal it receives from the external oscillator that drives the front-side bus into multiple regular signals for its own internal use. In fact, you might be able to purchase a number of processors rated for different (internal) speeds that are all compatible with a single motherboard that has a front-side bus rated, for instance, at 1,333 MHz. Furthermore, you might be able to adjust the internal clock rate of the CPU that you purchased through settings in the BIOS.
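Here is a minimal sketch of that relationship (the multiplier values below are hypothetical, chosen only for illustration): the CPU's internal clock is the front-side bus rate times the multiplier the CPU applies to it.

    # Internal CPU clock = front-side bus rate x CPU multiplier.
    fsb_mhz = 1333                       # the motherboard's rated FSB from the text
    multipliers = [2.0, 2.25, 2.5]       # hypothetical multipliers for compatible CPUs

    for multiplier in multipliers:
        internal_mhz = fsb_mhz * multiplier
        print(f"Multiplier {multiplier}: internal clock of about {internal_mhz / 1000:.2f} GHz")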
The speed of a processor can also be tweaked by overclocking, or running the processor at a higher speed than the one at which the manufacturer rated it. Running at a higher speed requires more voltage and also generates more heat, which can shorten the life of the CPU. Manufacturers often discourage the practice (of course, they want you to just buy a faster and more expensive CPU), and it usually voids any warranty. However, some chips are sold today that specifically give you the ability to overclock them. Our official recommendation is to not do it unless the manufacturer says it's okay. If you're curious, plenty of information on how to overclock is available online.
The string of instructions that a CPU runs is known as a thread. Old processors were capable of running only one thread at a time, whereas newer ones can run multiple threads at once. This is called multithreading.
Intel markets their multithreading technology as Hyper-Threading Technology (HTT). HTT is a form of simultaneous multithreading (SMT). SMT takes advantage of a modern CPU's superscalar architecture. Superscalar processors can have multiple instructions operating on separate data in parallel.
HTT-capable processors appear to the operating system to be two processors. As a result, the operating system can schedule two processes at the same time, as in the case of symmetric multiprocessing (SMP), where two or more processors use the same system resources. In fact, the operating system must support SMP in order to take advantage of HTT. If the current process stalls because of missing data caused by, say, cache or branch prediction issues, the execution resources of the processor can be reallocated for a different process that is ready to go, reducing processor downtime.
HTT manifests itself in the Windows 10 Task Manager by, for example, showing graphs for twice as many CPUs as the system has cores. These virtual CPUs are listed as logical processors (see Figure 1.28).
FIGURE 1.28 Logical processors in Windows
For an in-market example, compare the Intel i5 with the Intel i7. Similar models will have the same number of cores (say, four), but the i7 supports HTT, whereas the i5 does not. This gives the i7 a performance edge over its cousin. The i9 is yet another step up from the i7. For everyday email and Internet use, the differences won't amount to much. But for someone who is using resource-intensive apps such as online gaming or virtual reality, the differences, especially between i5 and i9 processors, can be important.
Many of today's CPUs support virtualization in hardware, which eases the burden on the system that software-based virtualization imposes. For more information on virtualization, see Chapter 8, “Virtualization and Cloud Computing.” AMD calls its virtualization technology AMD-V (V for virtualization), whereas Intel calls its version Virtualization Technology (VT). Most processors made today support virtualization technology, but not all. Keep in mind that the BIOS/UEFI and operating system must support it as well for virtualization to work. You may need to manually enable the virtualization support in the BIOS/UEFI before it can be used. If you have an Intel processor and would like to check its support of VT, visit the following site to download the Intel Processor Identification Utility:
https://downloadcenter.intel.com/download/7838
As shown in Figure 1.30, the CPU Technologies tab of this utility tells you if your CPU supports Intel VT.
FIGURE 1.30 Intel Processor Identification Utility
“More memory, more memory, I don't have enough memory!” Adding memory is one of the most popular, easy, and inexpensive ways to upgrade a computer. As the computer's CPU works, it stores data and instructions in the computer's memory. Contrary to what you might expect from an inexpensive solution, memory upgrades tend to afford the greatest performance increase as well, up to a point. Motherboards have memory limits; operating systems have memory limits; CPUs have memory limits.
To identify memory visually within a computer, look for several thin rows of small circuit boards sitting vertically, potentially packed tightly together near the processor. In situations where only one memory stick is installed, it will be that stick and a few empty slots that are tightly packed together. Figure 1.31 shows where memory is located in a system—in this case, all four banks are full.
FIGURE 1.31 Location of memory within a system
There are a few technical terms and phrases that you need to understand with regard to memory and its function:
Parity checking and error-correction code (ECC)
Single- and double-sided memory
Single-, dual-, and multichannel memory
These terms are discussed in detail in the following sections.
Parity checking is a rudimentary error-checking scheme that offers no error correction. Parity checking works most often on a byte, or 8 bits, of data. A ninth bit is added at the transmitting end and removed at the receiving end so that it does not affect the actual data transmitted. If the receiving end does not agree with the parity that is set in a particular byte, a parity error results. The four most common parity schemes affecting this extra bit are known as even, odd, mark, and space. Even and odd parity are used in systems that actually compute parity. Mark (a term for a digital pulse, or 1 bit) and space (a term for the lack of a pulse, or a 0 bit) parity are used in systems that do not compute parity but expect to see a fixed bit value stored in the parity location. Systems that do not support or reserve the location required for the parity bit are said to implement non-parity memory.
The most basic model for implementing memory in a computer system uses eight memory chips to form a set. Each memory chip holds millions or billions of bits of information, each in its own cell. For every byte in memory, one bit is stored in each of the eight chips. A ninth chip is added to the set to support the parity bit in systems that require it. One or more of these sets, implemented as individual chips or as chips mounted on a memory module, form a memory bank.
A bank of memory is required for the computer system to recognize electrically that the minimum number of memory components or the proper number of additional memory components has been installed. The width of the system data bus, the external bus of the processor, dictates how many memory chips or modules are required to satisfy a bank. For example, one 32-bit, 72-pin SIMM (single in-line memory module) satisfies a bank for an old 32-bit CPU, such as an i386 or i486 processor. Two such modules are required to satisfy a bank for a 64-bit processor—a Pentium, for instance. However, only a single 64-bit, 168-pin DIMM is required to satisfy the same Pentium processor. For those modules that have fewer than eight or nine chips mounted on them, more than 1 bit for every byte is being handled by some of the chips. For example, if you see three chips mounted, the two larger chips customarily handle 4 bits, a nibble, from each byte stored, and the third, smaller chip handles the single parity bit for each byte.
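A minimal sketch of the bank math described above, using the module widths from the text: dividing the processor's external data bus width by the module width tells you how many modules it takes to satisfy one bank.

    def modules_per_bank(data_bus_bits, module_bits):
        # A bank is satisfied when the installed modules add up to the bus width.
        return data_bus_bits // module_bits

    print(modules_per_bank(32, 32))   # one 32-bit SIMM fills a bank on a 32-bit CPU -> 1
    print(modules_per_bank(64, 32))   # a 64-bit Pentium needs two 32-bit SIMMs  -> 2
    print(modules_per_bank(64, 64))   # a single 64-bit DIMM fills the same bank -> 1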
Even and odd parity schemes operate on each byte in the set of memory chips. In each case, the number of bits set to a value of 1 is counted up. If there is an even number of 1 bits in the byte (0, 2, 4, 6, or 8), even parity stores a 0 in the ninth bit, the parity bit; otherwise, it stores a 1 to even up the count. Odd parity does just the opposite, storing a 1 in the parity bit to make an even number of 1s odd and a 0 to keep an odd number of 1s odd. You can see that this is effective only for determining if there was a blatant error in the set of bits received, but there is no indication as to where the error is and how to fix it. Furthermore, the total 1-bit count is not important, only whether it's even or odd. Therefore, in either the even or odd scheme, if an even number of bits is altered in the same byte during transmission, the error goes undetected because flipping 2, 4, 6, or all 8 bits results in an even number of 1s remaining even and an odd number of 1s remaining odd.
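The following Python sketch (our own illustration, not production error-checking code) computes the parity bit for a byte under both schemes and shows why flipping an even number of bits slips past the check:

    def parity_bit(byte, scheme="even"):
        # Count the 1 bits; even parity stores 0 when the count is already even,
        # odd parity does the opposite.
        ones = bin(byte & 0xFF).count("1")
        if scheme == "even":
            return 0 if ones % 2 == 0 else 1
        return 1 if ones % 2 == 0 else 0

    data = 0b10110100                         # four 1 bits in the byte
    print(parity_bit(data, "even"))           # 0: the count of 1s is already even
    print(parity_bit(data, "odd"))            # 1: forces the total count to be odd

    corrupted = data ^ 0b00000011             # two bits flipped in transit
    print(parity_bit(corrupted, "even"))      # still 0, so the error goes undetected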
Mark and space parity are used in systems that want to see 9 bits for every byte transmitted but don't compute the parity bit's value based on the bits in the byte. Mark parity always uses a 1 in the parity bit, and space parity always uses a 0. These schemes offer less error detection capability than the even and odd schemes because only changes in the parity bit can be detected. Again, parity checking is not error correction; it's error detection only, and not the best form of error detection at that. Nevertheless, an error can lock up the entire system and display a memory parity error. Enough of these errors and you need to replace the memory. Therefore, parity checking remains from the early days of computing as an effective indicator of large-scale memory and data-transmission failure, such as with serial interfaces attached to analog modems or networking console interfaces, but not so much for detecting random errors.
In the early days of personal computing, almost all memory was parity-based. As quality has increased over the years, parity checking in the RAM subsystem has become less common. As noted earlier, if parity checking is not supported, there will generally be fewer chips per module, usually one fewer per column of RAM.
The next step in the evolution of memory error detection is known as error-correction code (ECC). If memory supports ECC, check bits are generated and stored with the data. An algorithm is performed on the data and its check bits whenever the memory is accessed. If the result of the algorithm is all zeros, then the data is deemed valid and processing continues. ECC can detect single- and double-bit errors and actually correct single-bit errors. In other words, if a particular byte—group of 8 bits—contains errors in 2 of the 8 bits, ECC can recognize the error. If only 1 of the 8 bits is in error, ECC can correct the error.
Generally speaking, the terms single-sided memory and double-sided memory refer to how some memory modules have chips on one side and others have chips on both sides. Double-sided memory is essentially treated by the system as two separate memory modules. Motherboards that support such memory have memory controllers that must switch between the two “sides” of the modules and, at any particular moment, can access only the side to which they have switched. Double-sided memory allows more memory to be inserted into a computer, using half the physical space of single-sided memory, which requires no switching by the memory controller.
Standard memory controllers manage access to memory in chunks of the same size as the system bus's data width. This is considered communicating over a single channel. Most modern processors have a 64-bit system data bus. This means that a standard memory controller can transfer exactly 64 bits of information at a time. Communicating over a single channel is a bottleneck in an environment where the CPU and memory can both operate faster than the conduit between them. Up to a point, every channel added in parallel between the CPU and RAM serves to ease this constriction.
Memory controllers that support dual-channel and greater memory implementation were developed in an effort to alleviate the bottleneck between the CPU and RAM. Dual-channel memory is the memory controller's coordination of two memory banks to work as a synchronized set during communication with the CPU, doubling the specified system bus width from the memory's perspective. Triple-channel memory, then, demands the coordination of three memory modules at a time. Quad-channel memory is the coordination of four memory modules at once. Collectively, they are known as multichannel memory implementations.
Because today's processors largely have 64-bit external data buses, and because one stick of memory satisfies this bus width, there is a 1:1 ratio between banks and modules. This means that implementing multichannel memory in today's most popular computer systems requires that two, three, or four memory modules be installed at a time. Note, however, that it's the motherboard, not the memory, that implements multichannel memory (more on this in a moment). Single-channel memory, in contrast, is the classic memory model that dictates only that a complete bank be satisfied whenever memory is initially installed or added. One bank supplies only half the width of the effective bus created by dual-channel support, for instance, which by definition pairs two banks at a time.
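As a rough sketch of the effect (the transfer rate below is purely illustrative), every channel adds another 64-bit path, so peak transfer capacity scales with the number of coordinated banks:

    # Each memory channel is 64 bits (8 bytes) wide; multichannel operation
    # multiplies the peak transfer capacity between the controller and RAM.
    channel_width_bytes = 8
    transfers_per_second = 200_000_000        # illustrative figure only

    for channels in (1, 2, 3, 4):             # single-, dual-, triple-, quad-channel
        peak_MBps = channels * channel_width_bytes * transfers_per_second // 1_000_000
        print(f"{channels} channel(s): {peak_MBps:,} MBps peak")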
In almost all cases, multichannel implementations support single-channel installation, but poorer performance should be expected. Multichannel motherboards often include slots of different colors, usually one of each color per set of slots. To use only a single channel, you populate slots of the same color, skipping neighboring slots to do so. Filling neighboring slots in a dual-channel motherboard takes advantage of its dual-channel capability.
Because of the special tricks that are played with memory subsystems to improve overall system performance, care must be taken during the installation of disparate memory modules. In the worst case, the computer will cease to function when modules of different speeds, different capacities, or different numbers of sides are placed together in slots of the same channel. If all of these parameters are identical, there should be no problem with pairing modules. Nevertheless, problems could still occur when modules from two different manufacturers or certain unsupported manufacturers are installed, all other parameters being the same. Technical support or documentation from the manufacturer of your motherboard should be able to help with such issues.
Although it's not the make-up of the memory that leads to multichannel support but instead the technology on which the motherboard is based, some memory manufacturers still package and sell pairs and triplets of memory modules in an effort to give you peace of mind when you're buying memory for a system that implements multichannel memory architecture. Keep in mind, the motherboard memory slots have the distinctive color-coding, not the memory modules.
Memory comes in many formats. Each one has a particular set of features and characteristics, making it best suited for a particular application. Some decisions about the application of the memory type are based on suitability; others are based on affordability to consumers or marketability to computer manufacturers. The following list gives you an idea of the vast array of memory types and subtypes:
Pay particular attention to all synchronous DRAM types as that's the most common type in use. Note that the type of memory does not dictate the packaging of the memory. Conversely, however, you might notice one particular memory packaging holding the same type of memory every time you come across it. Nevertheless, there is no requirement to this end. Let's detail the intricacies of some of these memory types.
DRAM is dynamic random access memory. This is what most people are talking about when they mention RAM. When you expand the memory in a computer, you are adding DRAM chips. You use DRAM to expand the memory in the computer because it's a cheaper type of memory. Dynamic RAM chips are cheaper to manufacture than most other types because they are less complex. Dynamic refers to the memory chips' need for a constant update signal (also called a refresh signal) in order to keep the information that is written there. If this signal is not received every so often, the information will bleed off and cease to exist. Currently, the most popular implementations of DRAM are based on synchronous DRAM and include DDR3 and DDR4. Occasionally you will see some DDR2, and DDR5 is new so it hasn't been widely adopted yet. Before discussing these technologies, let's take a quick look at the legacy asynchronous memory types, none of which should appear on modern exams.
Asynchronous DRAM (ADRAM) is characterized by its independence from the CPU's external clock. Asynchronous DRAM chips have codes on them that end in a numerical value that is related to (often 1/10 of the actual value of) the access time of the memory. Access time is essentially the difference between the time when the information is requested from memory and the time when the data is returned. Common access times attributed to asynchronous DRAM were in the 40- to 120-nanosecond (ns) vicinity. A lower access time is obviously better for overall performance.
Because ADRAM is not synchronized to the front-side bus, you would often have to insert wait states through the BIOS setup for a faster CPU to be able to use the same memory as a slower CPU. These wait states represented intervals in which the CPU had to mark time and do nothing while waiting for the memory subsystem to become ready again for subsequent access.
Common asynchronous DRAM technologies included fast page mode (FPM), extended data out (EDO), and burst EDO (BEDO). Feel free to investigate the details of these particular technologies, but a thorough discussion of these memory types is not necessary here. The A+ technician should be concerned with synchronous forms of RAM, which are the only types of memory being installed in mainstream computer systems today.
Synchronous DRAM (SDRAM) shares a common clock signal with the computer's system-bus clock, which provides the common signal that all local-bus components use for each step that they perform. This characteristic ties SDRAM to the speed of the FSB and hence the processor, eliminating the need to configure the CPU to wait for the memory to catch up.
Originally, SDRAM was the term used to refer to the only form of synchronous DRAM on the market. As the technology progressed, and more was being done with each clock signal on the FSB, various forms of SDRAM were developed. What was once called simply SDRAM needed a new name retroactively. Today, we use the term single data rate SDRAM (SDR SDRAM) to refer to this original type of SDRAM.
SDR SDRAM SDR SDRAM is a legacy RAM technology, and it is presented here only to provide a basis for the upcoming discussion of DDR and other more advanced RAM. With SDR SDRAM, every time the system clock ticks, 1 bit of data can be transmitted per data pin, limiting the bit rate per pin of SDRAM to the corresponding numerical value of the clock's frequency. With today's processors interfacing with memory using a parallel data-bus width of 8 bytes (hence the term 64-bit processor), a 100 MHz clock signal produces 800 MBps. That's megabytes per second, not megabits. Such memory modules are referred to as PC100, named for the true FSB clock rate upon which they rely. PC100 was preceded by PC66 and succeeded by PC133, which used a 133 MHz clock to produce 1,066 MBps of throughput.
Note that throughput in megabytes per second is easily computed as eight times the rating in the name. This trick works for the more advanced forms of SDRAM as well. The common thread is the 8-byte system data bus. Incidentally, you can double throughput results when implementing dual-channel memory.
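As a quick illustration of the 8:1 rule (and the dual-channel doubling just mentioned), here is a minimal Python sketch. The function name is ours for illustration only; the clock values are the nominal SDR ratings discussed above.

```python
# The 8:1 rule: an SDR module's throughput in MBps is eight times the clock
# rating in its name, because the system data bus moves 8 bytes per transfer.
def sdr_throughput_mbps(fsb_clock_mhz: float, channels: int = 1) -> int:
    return int(fsb_clock_mhz * 8 * channels)

# The nominal "66" and "133" clocks are really 66.67 and 133.33 MHz, which is
# why PC133 is quoted at 1,066 MBps rather than a strict 133 x 8 = 1,064.
for name, clock in [("PC66", 66.67), ("PC100", 100), ("PC133", 133.33)]:
    single = sdr_throughput_mbps(clock)
    dual = sdr_throughput_mbps(clock, channels=2)
    print(f"{name}: {single} MBps single-channel, {dual} MBps dual-channel")
```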
DDR SDRAM Double data rate (DDR) SDRAM transfers data on both the rising and falling edges of the clock signal, performing two operations per cycle where SDR performed one, so motherboard makers advertise an effective FSB of twice the actual clock. Because the actual system clock speed is rarely mentioned in marketing literature, on packaging, or on store shelves for DDR and higher, you can use this advertised FSB frequency in your computations for DDR throughput. For example, with a 100 MHz clock and two operations per cycle, motherboard makers will market their boards as having an FSB of 200 MHz. Multiplying this effective rate by 8 bytes transferred per cycle, the data rate is 1,600 MBps. Because DDR made throughput a bit trickier to compute, the industry began using this final throughput figure to name the memory modules instead of the actual frequency, which was used when naming SDR modules. This makes the result seem many times better (and much more marketable), while it's really only about twice as good.
In this example, the module is referred to as PC1600, based on a throughput of 1,600 MBps. The chips that go into making PC1600 modules are named DDR200 for the effective FSB frequency of 200 MHz. Stated differently, the industry uses DDR200 memory chips to manufacture PC1600 memory modules.
Let's make sure that you grasp the relationship between the speed of the FSB and the name for the related chips as well as the relationship between the name of the chips (or the speed of the FSB) and the name of the modules. Consider an FSB of 400 MHz, meaning an actual clock signal of 200 MHz, by the way—the FSB is double the actual clock for DDR, remember. It should be clear that this motherboard requires modules populated with DDR400 chips and that you'll find such modules marketed and sold as PC3200.
Let's try another. What do you need for a motherboard that features a 333 MHz FSB (actual clock is 166 MHz)? Well, just using the 8:1 rule mentioned earlier, you might be on the lookout for a PC2667 module. Note, however, that sometimes the numbers have to be played with a bit to come up with the industry's marketing terms. You'll have an easier time finding PC2700 modules that are designed specifically for a motherboard like yours, with an FSB of 333 MHz. The label isn't always technically accurate, but round numbers sell better, perhaps. The important concept here is that if you find PC2700 modules and PC2667 modules, there's absolutely no difference; they both have a 2,667 MBps throughput rate. Go for the best deal; just make sure that the memory manufacturer is reputable.
DDR2 SDRAM Think of the 2 in DDR2 as yet another multiplier of 2 in the SDRAM technology, using a lower peak voltage to keep power consumption down (1.8V vs. the 2.5V of DDR). Still double-pumping, DDR2, like DDR, uses both sweeps of the clock signal for data transfer. Internally, DDR2 further splits each clock pulse in two, doubling the number of operations it can perform per FSB clock cycle. Through enhancements in the electrical interface and buffers, as well as through adding off-chip drivers, DDR2 nominally produces four times the throughput that SDR is capable of producing.
Continuing the DDR example, DDR2, using a 100 MHz actual clock, transfers data in four operations per cycle (effective 400 MHz FSB) and still 8 bytes per operation, for a total of 3,200 MBps. Just as with DDR, chips for DDR2 are named based on the perceived frequency. In this case, you would be using DDR2-400 chips. DDR2 carries on the effective FSB frequency method for naming modules but cannot simply call them PC3200 modules because those already exist in the DDR world. DDR2 calls these modules PC2-3200. (Note the dash to keep the numeric components separate.)
As another example, it should make sense that PC2-5300 modules are populated with DDR2-667 chips. Recall that you might have to play with the numbers a bit. If you multiply the well-known FSB speed of 667 MHz by 8 to figure out what modules you need, you might go searching for PC2-5333 modules. You might find someone advertising such modules, but most compatible modules will be labeled PC2-5300 for the same marketability mentioned earlier. They both support 5,333 MBps of throughput.
DDR3 SDRAM The next generation of memory devices was designed to roughly double the performance of DDR2 products. Based on the functionality and characteristics of DDR2's proposed successor, most informed consumers and some members of the industry surely assumed the forthcoming name would be DDR4. This was not to be, however, and DDR3 was born. This naming convention proved that the 2 in DDR2 was not meant to be a multiplier but instead a revision mark of sorts. Well, if DDR2 was the second version of DDR, then DDR3 is the third. DDR3 is a memory type, designed to be twice as fast as the DDR2 memory, that operates with the same system clock speed. Just as DDR2 was required to lower power consumption to make up for higher frequencies, DDR3 must do the same. In fact, the peak voltage for DDR3 is only 1.5V.
The most commonly found range of actual clock speeds for DDR3 tends to be from 133 MHz at the low end to less than 300 MHz. Because double-pumping continues with DDR3, and because four operations occur at each wave crest (eight operations per cycle), this frequency range translates to common FSB implementations from 1,066 MHz to more than 2,000 MHz in DDR3 systems. These memory devices are named following the conventions established earlier. Therefore, if you buy a motherboard with a 1,600 MHz FSB, you know immediately that you need a memory module populated with DDR3-1600 chips, because the chips are always named for the FSB speed. Using the 8:1 module-to-chip/FSB naming rule, the modules that you need would be called PC3-12800, supporting a 12,800 MBps throughput.
The earliest DDR3 chips, however, were based on a 100 MHz actual clock signal, so we can build on our earlier example, which was also based on an actual clock rate of 100 MHz. With eight operations per cycle, the FSB on DDR3 motherboards is rated at 800 MHz, quite a lot of efficiency while still not needing to change the original clock with which our examples began. Applying the 8:1 rule again, the resulting RAM modules for this motherboard are called PC3-6400 and support a throughput of 6,400 MBps, carrying chips called DDR3-800, again named for the FSB speed.
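Because the same arithmetic repeats for every generation (actual clock, times transfers per cycle, times 8 bytes), here is a small Python sketch that pulls the examples above together. The function and its structure are ours for illustration only; keep in mind that real module labels are sometimes rounded for marketing (PC2700 rather than PC2667), which strict math will not reproduce.

```python
# Chip/module naming arithmetic for DDR generations (illustrative sketch).
# effective rate (advertised FSB) = actual clock x transfers per clock cycle
# throughput in MBps              = effective rate x 8 (the 64-bit data bus)
TRANSFERS_PER_CLOCK = {"DDR": 2, "DDR2": 4, "DDR3": 8}

def ddr_names(generation: str, actual_clock_mhz: int):
    """Return (chip name, module name, throughput in MBps) for a given actual clock."""
    effective = actual_clock_mhz * TRANSFERS_PER_CLOCK[generation]
    throughput = effective * 8
    if generation == "DDR":
        chip, module = f"DDR{effective}", f"PC{throughput}"
    else:
        n = generation[-1]                     # "2" or "3"
        chip, module = f"{generation}-{effective}", f"PC{n}-{throughput}"
    return chip, module, throughput

for gen, clock in [("DDR", 200), ("DDR2", 100), ("DDR3", 100), ("DDR3", 200)]:
    chip, module, tput = ddr_names(gen, clock)
    print(f"{gen} at {clock} MHz actual clock -> {chip} chips on {module} modules ({tput:,} MBps)")
```

Running the sketch reproduces the pairings worked out above: DDR400 chips on PC3200 modules, DDR2-400 on PC2-3200, DDR3-800 on PC3-6400, and DDR3-1600 on PC3-12800.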
DDR5 SDRAM After a long wait, DDR5 finally hit the market at the end of 2021. Intel's Alder Lake platform was the first to support it; AMD added DDR5 support in 2022 with the release of Zen 4.
DDR5 doubles the maximum data rate of DDR4, reaching 6.4 Gbps per pin, as is expected for a new memory standard. It also runs at a lower 1.1 volts, which improves power efficiency. DDR5 is also the first standard to offer modules as large as 128 GB.
Static random access memory (SRAM) doesn't require a refresh signal like DRAM does. The chips are more complex and are thus more expensive. However, they are considerably faster. DRAM access times come in at 40 nanoseconds (ns) or more; SRAM has access times faster than 10 ns. SRAM is classically used for cache memory.
ROM stands for read-only memory. It is called read-only because you could not write to the original form of this memory. Once information had been etched on a silicon chip and manufactured into the ROM package, the information couldn't be changed. Some form of ROM is normally used to store the computer's BIOS because this information normally does not change often.
The system ROM in the original IBM PC contained the power-on self-test (POST), BIOS, and cassette BASIC. Later, IBM computers and compatibles included everything but the cassette BASIC. The system ROM enables the computer to “pull itself up by its bootstraps,” or boot (find and start the operating system).
Through the years, different forms of ROM were developed that could be altered, later ones more easily than earlier ones. The first generation was the programmable ROM (PROM), which could be written to for the first time in the field using a special programming device, but then no more. You may liken this to the burning of a DVD-R.
The erasable PROM (EPROM) followed the PROM, and it could be erased using ultraviolet light and subsequently reprogrammed using the original programming device. These days, flash memory is a form of electrically erasable PROM (EEPROM). Of course, it does not require UV light to erase its contents, but rather a slightly higher than normal electrical pulse.
The memory slots on a motherboard are designed for particular module form factors or styles. RAM historically evolved from form factors no longer seen for such applications, such as dual in-line package (DIP), single in-line memory module (SIMM), and single in-line pin package (SIPP). The most popular form factors for primary memory modules today are as follows:
Desktop computers will use DIMMs. Laptops and smaller devices require SODIMMs or smaller memory packaging. So, in addition to coordinating the speed of the components, their form factor is an issue that must be addressed.
One type of memory package is known as a DIMM, which stands for dual in-line memory module. DIMMs are 64-bit memory modules that are used as a package for the SDRAM family: SDR, DDR, DDR2, DDR3, DDR4, and DDR5. The term dual refers to the fact that, unlike their SIMM predecessors, DIMMs differentiate the functionality of the pins on one side of the module from the corresponding pins on the other side. With 84 pins per side, this makes 168 independent pins on each standard SDR module, as shown with its two keying notches as well as the last pin labeled 84 on the right side in Figure 1.33. SDR SDRAM modules are no longer part of the CompTIA A+ objectives, and they are mentioned here as a foundation only.
FIGURE 1.33 An SDR dual in-line memory module (DIMM)
The DIMM used for DDR memory has a total of 184 pins and a single keying notch, whereas the DIMM used for DDR2 has a total of 240 pins, one keying notch, and possibly an aluminum cover for both sides, called a heat spreader and designed like a heat sink to dissipate heat away from the memory chips and prevent overheating. The DDR3 DIMM is similar to that of DDR2. It has 240 pins and a single keying notch, but the notch is in a different location to avoid cross-insertion. Not only is the DDR3 DIMM physically incompatible with DDR2 DIMM slots, it's also electrically incompatible. A DDR4 DIMM is the same length as a DDR3 DIMM, but is about 0.9 mm taller and has 288 pins. The key is in a different spot, so you can't put DDR4 memory into a DDR2 or DDR3 slot. Finally, DDR5 has 288 pins as DDR4 does but is keyed differently so that DDR4 modules won't fit into DDR5 slots, and vice versa. Table 1.3 summarizes some key differences between the types of DDR we've introduced in this chapter.
Characteristic | DDR | DDR2 | DDR3 | DDR4 | DDR5 |
---|---|---|---|---|---|
Pins | 184 | 240 | 240 | 288 | 288 |
Max memory | 1 GB | 8 GB | 32 GB | 64 GB | 128 GB |
Channels | 1 | 1 | 1 | 1 | 2 |
Voltage | 2.5 V | 1.8 V | 1.5 V | 1.2 V | 1.1 V |
TABLE 1.3 DDR characteristics
Figure 1.34 shows, from top to bottom, DDR4, DDR3, and DDR2 DIMMs.
FIGURE 1.34 DDR4, DDR3, and DDR2 DIMMs
Laptop computers and other computers that require much smaller components don't use standard RAM packages, such as DIMMs. Instead, they call for a much smaller memory form factor, such as a small outline DIMM (SODIMM). SODIMMs are available in many physical implementations, including the older 32-bit (72- and 100-pin) configuration and newer 64-bit (144-pin SDR SDRAM, 200-pin DDR/DDR2, 204-pin DDR3, 260-pin DDR4, and 262-pin DDR5) configurations.
All 64-bit modules have a single keying notch. The 144-pin module's notch is slightly off center. Note that although the 200-pin SODIMMs for DDR and DDR2 have slightly different keying, the difference is subtle enough that you must pay close attention to tell the two apart. They are not interchangeable. DDR3, DDR4, and DDR5 SODIMMs are keyed differently from the others as well. Figure 1.35 shows a DDR3 SODIMM compared to DDR3 and DDR2 DIMMs.
FIGURE 1.35 DDR3 SODIMM vs. DDR3 and DDR2 DIMMs
It's a basic concept of physics: electronic components turn electricity into work and heat. The excess heat must be dissipated or it will shorten the life of the components. In some cases (like with the CPU), the component will produce so much heat that it can destroy itself in a matter of seconds if there is not some way to remove this extra heat.
Air-cooling methods are used to cool the internal components of most PCs. With air cooling, the movement of air removes the heat from the component. Sometimes, large blocks of metal called heat sinks are attached to a heat-producing component in order to dissipate the heat more rapidly.
When you turn on a computer, you will often hear lots of whirring. Contrary to popular opinion, the majority of the noise isn't coming from the hard disk (unless it's about to go bad). Most of this noise is coming from the various fans inside the computer. Fans provide airflow within the computer.
Most PCs have a combination of the following seven fans:
Ideally, the airflow inside a computer should resemble what is shown in Figure 1.39, where the back of the chassis is shown on the left in the image.
FIGURE 1.39 System unit airflow
Note that you must pay attention to the orientation of the power supply's airflow. If the power supply fan is an exhaust fan, as assumed in this discussion, the front and rear fans will match their earlier descriptions: front, intake; rear, exhaust. If you run across a power supply that has an intake fan, the orientation of the supplemental chassis fans should be reversed as well. The rear chassis fan(s) should always be installed in the same orientation as the power supply fan to avoid creating a small airflow circuit that circumvents the cross-flow of air throughout the case. The front chassis fan and the rear fans should always be installed in opposite orientations to avoid having them fight against each other and thereby reduce the internal airflow. Reversing supplemental chassis fans is usually no harder than removing four screws and flipping the fan. Sometimes, the fan might just snap out, flip, and then snap back in, depending on the way it is rigged up.
If you are going to start overclocking your computer, you will want to do everything in your power to cool all of its components, and that includes the memory.
There are two methods of cooling memory: passive and active. The passive memory cooling method just uses the ambient case airflow to cool the memory through the use of enhanced heat dissipation. For this, you can buy either heat sinks or, as mentioned earlier, special “for memory chips only” devices known as heat spreaders. Recall that these are special aluminum or copper housings that wrap around memory chips and conduct the heat away from them.
Active cooling, on the other hand, usually involves forcing some kind of cooling medium (air or water) around the RAM chips themselves or around their heat sinks. Most often, active cooling methods are just high-speed fans directing air right over a set of heat spreaders.
You might be thinking, “Hey, my hard drive is doing work all the time. Is there anything I can do to cool it off?” There are both active and passive cooling devices for hard drives. Most common, however, is the active cooling bay. You install a hard drive in a special device that fits into a 5¼″ expansion bay. This device contains fans that draw in cool air over the hard drive, thus cooling it. Figure 1.40 shows an example of one of these active hard drive coolers. As you might suspect, you can also get heat sinks for hard drives.
FIGURE 1.40 An active hard disk cooler
Every motherboard has a chip or chipset that controls how the computer operates. Like other chips in the computer, the chipset is normally cooled by the ambient air movement in the case. However, when you overclock a computer, the chipset may need to be cooled more because it is working harder than it normally would be. Therefore, it is often desirable to replace the onboard chipset cooler with a more efficient one. Refer back to Figure 1.4 for a look at a modern chipset cooling solution.
Probably the greatest challenge in cooling is the computer's CPU. It is the component that generates the most heat in a computer (aside from some pretty insane GPUs out there). As a matter of fact, if a modern processor isn't actively cooled all of the time, it will generate enough heat to burn itself up in seconds. That's why most motherboards have an internal CPU heat sensor and a CPU_FAN sensor. If no cooling fan is active, these devices will shut down the computer before damage occurs.
There are multiple CPU cooling methods, but the two most common are air cooling and liquid cooling.
The parts inside most computers are cooled by air moving through the case. The CPU is no exception. However, because of the large amount of heat produced, the CPU must have (proportionately) the largest surface area exposed to the moving air in the case. Therefore, the heat sinks on the CPU are the largest of any inside the computer.
The CPU fan often blows air down through the body of the heat sink to force the heat into the ambient internal air where it can join the airflow circuit for removal from the case. However, in some cases, you might find that the heat sink extends up farther, using radiator-type fins, and the fan is placed at a right angle and to the side of the heat sink. This design moves the heat away from the heat sink immediately instead of pushing the air down through the heat sink. CPU fans can be purchased that have an adjustable rheostat to allow you to dial in as little airflow as you need, aiding in noise reduction but potentially leading to accidental overheating.
It should be noted that the highest-performing CPU coolers use copper plates in direct contact with the CPU. They also use high-speed and high-CFM cooling fans to dissipate the heat produced by the processor. CFM is short for cubic feet per minute, an airflow measurement of the volume of air that passes by a stationary object per minute. Figure 1.41 shows a newer, large heat sink with a fan in the center. In the picture it can be tough to gauge size—this unit is about six inches across! (And to be fair, this heat sink should have a second fan on one of the sides, but with the second fan the heat sink wouldn't fit into the author's case—the RAM was in the way.)
FIGURE 1.41 Large heat sink and fan
Most new CPU heat sinks use heat pipes (sealed tubes) to transfer heat away from the CPU. With any cooling system, the more surface area exposed to the cooling method, the better the cooling. Plus, the heat pipes can be used to transfer heat to a location away from the heat source before cooling. This is especially useful in small form factor cases and in laptops, where open space is limited.
With advanced heat sinks and CPU cooling methods like this, it is important to improve the thermal transfer efficiency as much as possible. To that end, cooling engineers came up with a glue-like compound that helps to bridge the extremely small gaps between the CPU and the heat sink, which avoids superheated pockets of air that can lead to focal damage of the CPU. This product is known as thermal transfer compound, or simply thermal compound (alternatively, thermal grease or thermal paste), and it can be bought in small tubes. Single-use tubes are also available and alleviate the guesswork involved with how much you should apply. Watch out, though; this stuff makes quite a mess and doesn't want to come off your fingers very easily. An alternative to the paste is a small thermal pad, which provides heat conductivity between the processor and the heat sink.
Apply the compound by placing a bead in the center of the heat sink, not on the CPU, because some heat sinks don't cover the entire CPU package. That might sound like a problem, but some CPUs don't have heat-producing components all the way out to the edges. Some CPUs even have a raised area directly over the silicon die within the packaging, resulting in a smaller contact area between the components. You should apply less than you think you need because the pressure of attaching the heat sink to the CPU will spread the compound across the entire surface in a very thin layer. It's advisable to use a clean, lint-free applicator of your choosing to spread the compound around a bit as well, just to get the spreading started. You don't need to concern yourself with spreading it too thoroughly or too neatly because the pressure applied during attachment will equalize the compound quite well. During attachment, watch for oozing compound around the edges, clean it off immediately, and use less next time.
If you've ever installed a brand-new heat sink onto a CPU, you've most likely used thermal compound or the thermal compound patch that was already applied to the heat sink for you. If your new heat sink has a patch of thermal compound pre-applied, don't add more. If you ever remove the heat sink, don't try to reuse the patch or any other form of thermal compound. Clean it all off and start fresh.
Liquid cooling is a technology whereby a special water block is used to conduct heat away from the processor (as well as from the chipset). Water is circulated through this block to a radiator, where it is cooled.
The theory is that you could achieve better cooling performance through the use of liquid cooling. For the most part, this is true. However, with traditional cooling methods (which use air and water), the lowest temperature you can achieve is room temperature. Plus, with liquid cooling, the pump is submerged in the coolant (generally speaking), so as it works, it produces heat, which adds to the overall liquid temperature.
The main benefit to liquid cooling is silence. Only one fan is needed: the fan on the radiator to cool the water. So, a liquid-cooled system can run extremely quietly.
There are two major classifications of liquid cooling systems in use with PCs today: all-in-one (AIO) coolers and custom loop systems. AIO systems are relatively easy to install—they require about as much effort as a heat sink and fan—and comparably priced to similarly effective air systems. Figure 1.42 shows an example from Corsair, with the pump in front and the fans behind it, attached to the radiator.
FIGURE 1.42 AIO liquid cooling system
AIO systems come in three common sizes: 120 mm (with one fan, and the most common), 240 mm (two fans, for overclocked components), and 360 mm (three fans, for high-end multicore overclocked components). Options with RGB lighting are readily available if that's the style you want. Custom loop systems can quickly become complex and expensive, but many hardcore gamers swear by their performance. The components are essentially the same as those in an AIO system—there's a radiator, pump, fans, some tubes, and liquid. However, each part is purchased separately, and some assembly is required.
In this chapter, we took a tour of the key internal system components of a PC. You learned primarily about the “big three,” which are the motherboard, processor, and memory. Included in the motherboard discussion were form factors; connector types such as PCIe, SATA, M.2, and headers; BIOS/UEFI settings; encryption; and the CMOS battery. CPU topics included features such as compatibility, architecture, multithreading, and virtualization. With RAM, you learned about different types (SODIMMs, DDR2, DDR3, DDR4, and DDR5) and concepts such as single-, dual-, triple-, and quad-channel memory, error correction, and parity.
Finally, the chapter ended with cooling systems, which keep the components from damaging themselves with excess heat. This chapter laid the foundation for the rest of the book, including the next few chapters on additional hardware components.
Know the form factors of system boards. Know the characteristics of and differences between ATX and ITX motherboards.
Know the components of a motherboard. Be able to describe, identify, and replace (where applicable) motherboard components, such as chipsets, expansion slots, memory slots, processor sockets, BIOS/UEFI (firmware), and CMOS batteries.
Be able to identify and differentiate motherboard connector types. Understand the differences between PCI, PCIe, SATA, eSATA, and M.2 connectors, as well as power connectors and headers.
Understand core concepts of motherboard compatibility. Know that Intel and AMD chips use different sockets and therefore are incompatible with each other. Also know differences between server, multisocket, desktop, and laptop motherboards.
Understand CPU architecture. Know the differences between x64, x86, and ARM processors, implications of single-core versus multicore CPUs, and multithreading and virtualization support.
Know what the BIOS/UEFI is responsible for. The BIOS/UEFI controls boot options, fan speeds, USB permissions, and security options such as the boot password, TPM, HSM, and Secure Boot.
Understand the purposes and characteristics of memory. Know about the characteristics that set the various types of memory apart from one another. This includes the actual types of memory, such as DRAM (which includes several varieties), SRAM, ROM, and CMOS, as well as memory packaging, such as DIMMs and SODIMMs. Also have a firm understanding of the different levels of cache memory as well as its purpose in general.
Know how to replace RAM, given a scenario. RAM must match the motherboard in form factor and in speed. For example, some motherboards will only accept DDR4 or DDR5 memory. The speed should be compatible with the motherboard, as indicated in the documentation.
Understand the purposes and characteristics of cooling systems. Know the different ways internal components can be cooled and how overheating can be prevented.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance-based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following questions require you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
You have been asked to remove a dual in-line memory module and insert one with a larger capacity in its place. Describe the process for doing so.
Identify the component each arrow points to in the following image of an ATX motherboard.
THE FOLLOWING COMPTIA A+ 220-1101 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
As a PC technician, you need to know quite a bit about hardware. Given the importance and magnitude of this knowledge, the best way to approach learning about it is in sections. The first chapter introduced the topic via the primary core components, and this chapter follows up where it left off. First, we will look at adding functionality by plugging expansion cards into the motherboard. Then, we will focus on storage devices that hold data persistently—that is, they don't require power to maintain data like RAM does. Finally, we will end the chapter by looking at the simple-looking but potentially dangerous box that gives the components the energy they need—the power supply.
An expansion card (also known as an adapter card) is simply a circuit board that you install into a computer to increase the capabilities of that computer. Expansion cards come in varying formats for different uses, but the important thing to note is that no matter what function a card has, the card being installed must match the bus type of the motherboard into which it is being installed. For example, you can install a PCIe network card into a PCIe expansion slot only.
For today's integrated components (those built into the motherboard), you might not need an adapter to achieve the related services, but you will still need to install a driver—a software program that lets the operating system talk to the hardware—to make the integrated devices function with the operating system. Most motherboard manufacturers supply drivers with their motherboards, typically on a flash drive, that contain all the device drivers needed to get the built-in electronics recognized by the operating system. Execution of the driver's setup program generally results in all components working properly.
The following are the four most common categories of expansion cards installed today:
Let's take a quick look at each of these card types, their functions, and what some of them look like.
A video card (sometimes called a graphics card) is the expansion card that you put into a computer to allow the computer to present information on some kind of display, typically a monitor or a projector. A video card is also responsible for converting the data sent to it by the CPU into the pixels, addresses, and other items required for display. Sometimes, video cards can include dedicated chips to perform some of these functions, thus accelerating the speed of display.
You will encounter two classes of video cards: onboard cards and add-on cards. Onboard (or integrated) cards are built into the motherboard. As mentioned earlier, you need to install a device driver to get them to work properly, but those often come packaged with the motherboard itself. The upside to an integrated card is that it frees up an expansion slot. The manufacturer can either leave the slot open or design the motherboard and/or case to be smaller. One downside is that if the video card fails, you need a new motherboard, or you can install an add-on card. A second downside is that the onboard video cards aren't typically high-end. Onboard cards generally share system memory with the processor, which limits the quality of graphics one can produce. If the user wants great graphics from a powerful video card, then an add-on card is almost always the way to go. For example, serious gamers will always insist on a separate video card.
As for add-on cards, PCIe is the preferred expansion slot type. You might be able to find the rare, outdated motherboard that still offers a legacy AGP slot, and you might see some cheap PCI video cards, but they are uncommon. The technology on which PCIe was designed performs better for video than those on which AGP and PCI are based. Figure 2.1 shows an example of a PCIe x16 video card. The video card pictured is 10.6" (270 mm) long and takes up quite a bit of space inside the case. Most cards today have built-in fans like this one does to reduce the chance of overheating.
FIGURE 2.1 A PCIe video expansion card
There is an extensive range of video cards available today on the market. For everyday usage, cards with 1–2 GB of video memory are inexpensive and will do the trick. For gamers, high-end cards with a minimum of 8 GB GDDR5 are recommended. Of course, over the lifespan of this book, that number is sure to increase. (As of this writing, cards with 24 GB GDDR6 are available.) High-end video cards can easily cost several thousand dollars.
The main two standards for video cards are the NVIDIA GeForce series and the AMD Radeon (formerly ATI Radeon) line. Gamers will debate the pros and cons of each platform but know that you can get a range of performance, from good to phenomenal, from either one. When looking for a card, know how much memory is wanted or needed and how many and which types of video ports (such as HDMI or DisplayPort) are available. We will talk more about the pros and cons of several video connectors in Chapter 3, “Peripherals, Cables, and Connectors.”
The most basic and prolific multimedia adapter is the sound card. Video capture cards also offer multimedia experiences but are less common than sound cards.
Just as there are devices to convert computer signals into printouts and video information, there are devices to convert those signals into sound. These devices are known as sound cards. Although sound cards started out as pluggable adapters, this functionality is one of the most common integrated technologies found on motherboards today. A sound card typically has small, round 1/8″ jacks on the back of it for connecting microphones, headphones, and speakers as well as other sound equipment. Older sound cards used a DA15 game port, which could be used for either joysticks or Musical Instrument Digital Interface (MIDI) controllers. Figure 2.2 shows an example of a sound card with a DA15 game port.
FIGURE 2.2 A classic sound card
In our section on video cards, we noted that integrated cards have inferior performance to add-on ones, and though the same holds true for sound cards, the difference isn't quite as drastic. Many of today's motherboards come equipped with 5.1 or 7.1 analog or digital audio and support other surround sound formats as well. For everyday users and even many gamers, integrated audio is fine.
For users who need extra juice, such as those who produce movies or videos or do other audio/video (A/V) editing, a specialized add-on sound card is a must. Very good quality sound cards can be found for under $100, compared with cheaper models around $20—there isn't a huge difference in price as there is with CPUs and video cards. Look for a card with a higher sampling rate (measured in kilohertz [kHz]) and higher signal-to-noise ratio (measured in decibels [dB]). The de facto standard for sound cards is the Sound Blaster brand. Although other brands exist, they will often tout “Sound Blaster compatibility” in their advertising to show that they are legit.
In addition to audio output, many A/V editors will require the ability to input custom music from an electronic musical keyboard or other device. A term you will hear in relation to this is the MIDI standard. As noted earlier, old sound cards would sometimes have a round 5-pin MIDI port, which was used to connect the musical keyboard or other instrument to the computer. Today, digital musical instrument connections are often made via USB. Nonetheless, you will still see the term MIDI compatible used with a lot of digital musical devices.
A video capture card is a stand-alone add-on card often used to save a video stream to the computer for later manipulation or sharing. This can be video from an Internet site, or video from an external device such as a digital camera or smartphone. Video-sharing sites on the Internet make video capture cards quite popular with enterprises and Internet users alike. Video capture cards need and often come with software to aid in the processing of multimedia input. While video and sound cards are internal expansion devices, capture cards can be internal (PCIe) or external (USB).
Not all video capture cards record audio signals while processing video signals. If this feature is important to you or the end user, be sure to confirm that the card supports it. Also know that capture cards work with standard video resolutions, and specific cards might be limited on the resolutions they support. Double-check the specifications to make sure the card will meet the need, and also make sure to get reviews of the software used with the device.
A network interface card (NIC) is an expansion card that connects a computer to a network so that it can communicate with other computers on that network. It translates the data from the parallel data stream used inside the computer into the serial data stream that makes up the frames used on the network. Internal cards have a connector for the type of expansion bus on the motherboard (PCIe or PCI) and external cards typically use USB. In addition to physically installing the NIC, you need to install drivers for the NIC in order for the computer to use the adapter to access the network. Figure 2.3 shows a PCIe x1 Ethernet NIC with an RJ-45 port. (Network connectors are covered in more detail in Chapter 3.)
FIGURE 2.3 A network interface card
You will see two different types of network cards: wired and wireless. A NIC has an interface matched to the type of network it is connecting to (such as fiber connectors, Registered Jack 45 [RJ-45] for unshielded twisted pair [UTP], an antenna for wireless, or BNC for legacy coax). Wireless cards of course don't need to use wires, so they won't necessarily have a wired port. Some do, just for compatibility or desperate necessity.
Wireless NICs have the unique characteristic of requiring that you configure their connecting device before configuring the NIC. Wired NICs can generally create a link and begin operation just by being physically connected to a hub or switch. The wireless access point or ad hoc partner computer must also be configured before secure communication, at a minimum, can occur by using a wireless NIC. These terms are explained in greater detail in Chapter 7, “Wireless and SOHO Networks.” Figure 2.4 shows a PCI wireless NIC for a desktop computer. On the back of it (the left side of the picture) is the wireless antenna.
FIGURE 2.4 A wireless NIC
An input/output card is often used as a catchall phrase for any expansion card that enhances the system, allowing it to interface with devices that offer input to the system, output from the system, or both. The following are common examples of modern I/O cards:
Figure 2.6 shows a 7-port PCIe x1 USB expansion card (left) next to an eSATA card (right). USB cards commonly come in 2-, 4-, and 7-port configurations, whereas eSATA cards often have one or two external ports. This eSATA card also has two internal SATA connectors on the top (left, in the picture) of the card. You'll also find cards that have multiple port types, such as eSATA and USB.
FIGURE 2.6 USB and eSATA expansion cards
These cards are to be installed in a compatible slot on the motherboard. Their configuration is minimal, and it is usually completed through the operating system's Plug and Play (PnP) process. Nevertheless, check the BIOS settings after installation for new entries in the menu structure. It's the job of the BIOS to track all the hardware in the system and supply resources as needed. For example, a new Thunderbolt expansion card might allow you to configure whether attached Thunderbolt devices should be allowed to wake the system, how long a delay should be observed before waking the system, and various settings for how to use memory and other resources.
Expansion cards might require configuration. However, most can be recognized automatically by a PnP operating system. In other words, resources are handed out automatically without jumper settings, or the installation of device drivers is handled or requested automatically. Supplying the drivers might be the only form of configuration required.
Some adapters, however, require more specific configuration steps during installation. For example:
In general, installation and configuration steps for expansion cards can be summarized as follows:
Connect power, if needed.
This most often applies to video cards.
After booting up the computer, install the driver.
Again, Plug and Play may take care of this automatically for you.
In any event, consult the documentation provided with your adapter or the manufacturer's website for additional configuration requirements or options. The more specialized the adapter, the more likely it will come with specialty-configuration utilities.
What good is a computer without a place to put everything? Storage media hold the files that the operating system needs to operate and the data that users need to save. What about saving to the cloud? The computers that make up the cloud, rather than the local computer, hold the storage media. The many different types of storage media differ in terms of their capacity (how much they can store), access time (how fast the computer can access the information), and the physical type of media used.
Hard disk drive (HDD) systems (or hard drives for short) are used for permanent storage and quick access. Hard drives typically reside inside the computer, where they are semi-permanently mounted with no external access (although there are external and removable hard drives) and can hold more information than other forms of storage. Hard drives use a magnetic storage medium, and they are known as conventional drives to differentiate them from newer solid-state storage media.
The hard disk drive system contains the following three critical components:
Figure 2.7 shows a hard disk drive and host adapter. The hard drive controller is integrated into the drive in this case, but it could reside on the host adapter in other hard drive technologies. This particular example shows a hard drive plugging into an expansion card. Today's drives almost always connect straight to the motherboard, again with the HBA being integrated with the drive itself.
FIGURE 2.7 A hard disk drive system
Hard drives, regardless of whether they are magnetic or solid-state, most often connect to the motherboard's SATA or Parallel Advanced Technology Attachment (PATA) interfaces. You learned about SATA and PATA in Chapter 1, but Figure 2.8 provides a reminder of what the interfaces look like; SATA is on the top.
The back of the hard drive will have data and power connectors. Figure 2.9 shows the data and power connectors for a PATA drive and a SATA drive.
FIGURE 2.8 Four SATA and two PATA ports
Today, IDE (PATA) hard drives are essentially obsolete. Most of that is due to the limitations in transfer speeds. Most PATA hard drives follow the ATA/100 standard, which has a maximum transfer speed of 100 MBps. There are faster ATA standards, such as ATA/133 and ATA/167, but drives using those standards are rare. SATA III (also known as SATA 6 Gb/s), on the other hand, has a maximum transfer speed of 600 MBps.
FIGURE 2.9 PATA (top) and SATA (bottom) hard drive data and power connectors
A hard drive is constructed in a cleanroom to avoid the introduction of contaminants into the hermetically sealed drive casing. Once the casing is sealed, most manufacturers seal one or more of the screws with a sticker warning that removal of or damage to the seal will result in voiding the drive's warranty. Even some of the smallest contaminants can damage the precision components if allowed inside the hard drive's external shell. The following is a list of the terms used to describe these components in the following paragraphs:
Inside the sealed case of the hard drive lie one or more platters, where the actual data is stored by the read/write heads. The heads are mounted on a mechanism that moves them in tandem across both surfaces of all platters. Older drives used a stepper motor to position the heads at discrete points along the surface of the platters, which spin at thousands of revolutions per minute on a spindle mounted to a hub. Newer drives use voice coils for a more analog movement, resulting in reduced data loss because the circuitry can sense where the data is located through a servo scheme, even if the data shifts due to changes in physical disc geometry. Figure 2.10 shows the internal components of a conventional hard drive.
FIGURE 2.10 Anatomy of a hard drive
By Eric Gaba, Wikimedia Commons user Sting, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=11278668
Before a hard drive can store data, it must be prepared. Factory preparation for newer drives, or low-level formatting in the field for legacy drives, maps the inherent flaws of the platters so that the drive controllers know not to place data in these compromised locations. Additionally, this phase in drive preparation creates concentric rings, or tracks, which are drawn magnetically around the surface of the platters. Sectors are then delineated within each of the tracks. Sectors are the magnetic domains that represent the smallest units of storage on the disk's platters. This is illustrated in Figure 2.11. Magnetic-drive sectors commonly store only 512 bytes (½ KB) of data each.
FIGURE 2.11 Cylinders, tracks, and sectors
The capacity of a hard drive is a function of the number of sectors it contains. The controller for the hard drive knows exactly how the sectors are laid out within the disk assembly. It takes direction from the BIOS when writing information to and reading information from the drive. The BIOS, however, does not always understand the actual geometry of the drive. For example, the BIOS does not support more than 63 sectors per track. Nevertheless, almost all hard drives today have tracks that contain many more than 63 sectors per track. As a result, a translation must occur from where the BIOS believes it is directing information to be written to where the information is actually written by the controller. When the BIOS detects the geometry of the drive, it is because the controller reports dimensions that the BIOS can understand. The same sort of trickery occurs when the BIOS reports to the operating system a linear address space for the operating system to use when requesting that data be written to or read from the drive through the BIOS.
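To illustrate the kind of translation being described, here is a simplified Python sketch of the classic CHS-to-LBA formula. The geometry values are hypothetical, and real drives layer further remapping on top of this, as noted above.

```python
# Classic CHS -> LBA translation (simplified sketch).
# The BIOS addresses a fictitious cylinder/head/sector geometry; the drive's
# controller maps that onto the actual layout of the platters.

def chs_to_lba(cylinder: int, head: int, sector: int,
               heads_per_cylinder: int, sectors_per_track: int) -> int:
    """Sectors are numbered from 1 within a track; cylinders and heads from 0."""
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

# Hypothetical reported geometry: 16 heads and the 63-sector BIOS limit noted above.
lba = chs_to_lba(cylinder=2, head=4, sector=5,
                 heads_per_cylinder=16, sectors_per_track=63)
print(lba)   # (2*16 + 4) * 63 + 4 = 2272
```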
After initial drive preparation, the drive is formatted with a file system, by the operating system, and then it's ready to store data. Filesystems laid down on the tracks and their sectors routinely group a configurable number of sectors into equal or larger sets called clusters or allocation units. This concept exists because operating system designers have to settle on a finite number of addressable units of storage and a fixed number of bits to address them uniquely.
No two files are allowed to occupy the same sector, so the opportunity exists for a waste of space if small files occupy only part of a sector. Clusters exacerbate the problem by having a similar foible: the operating system does not allow any two files to occupy the same cluster. Thus, the larger the cluster size, the larger the potential waste. So although you can increase the cluster size (generally to as large as 64 KB, which corresponds to 128 sectors), you should keep in mind that unless you are storing a notable number of very large files, the waste will escalate astoundingly, perhaps negating or reversing your perceived storage-capacity increase.
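A quick sketch of the slack-space arithmetic follows. The file sizes are made up, and the 4 KB and 64 KB cluster sizes simply bracket the range discussed above (512-byte sectors, up to 128 sectors per cluster).

```python
# Slack-space estimate: each file occupies whole clusters, so the unused tail
# of its last cluster is wasted (illustrative sketch with hypothetical files).
import math

def wasted_bytes(file_sizes, cluster_size):
    return sum(math.ceil(size / cluster_size) * cluster_size - size
               for size in file_sizes)

files = [700, 3_000, 10_000, 40_000]          # hypothetical small files, in bytes
for cluster in (4 * 1024, 64 * 1024):         # 4 KB (8 sectors) vs. 64 KB (128 sectors)
    print(f"{cluster // 1024} KB clusters waste {wasted_bytes(files, cluster):,} bytes")
```

With these four small files, the 64 KB clusters waste roughly 27 times as much space as the 4 KB clusters do, which is the trade-off described above.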
As the electronics within the HBA and controller get faster, they are capable of requesting data at higher and higher rates. If the platters are spinning at a constant rate, however, the information can be accessed only as fast as a given fixed rate. To make information available to the electronics more quickly, manufacturers increase the speed at which the platters spin from one generation of drives to the next, with multiple speeds coexisting in the marketplace for an unpredictable period, at least until the demand dies down for one or more speeds.
The following spin rates have been used in the industry for the platters in conventional magnetic hard disk drives:
While it is true that a higher revolutions per minute (rpm) rating results in the ability to move data more quickly, there are many applications that do not benefit from increased disk-access speeds. As a result, you should choose faster drives, which are also usually more expensive per byte of capacity, only when you have an application for this type of performance, such as for housing the partition where the operating system resides or for very disk-intensive programs. For comparison, a 7,200 rpm SATA hard drive can sustain data read speeds of about 100 MBps, which is about the same as a PATA ATA/100 7,200 rpm drive. A 10,000 rpm (also known as 10k) SATA drive can top out around 200 MBps.
Higher speeds also consume more energy and produce more heat. The lower speeds can be ideal in laptops, where heat production and battery usage can be issues with higher-speed drives. Even the fastest conventional hard drives are slower than solid-state drives are at transferring data.
Physically, the most common hard drive form factors (sizes) are 3.5" and 2.5". Desktops traditionally use 3.5" drives, whereas the 2.5" drives are made for laptops—although most laptops today avoid using conventional HDDs. Converter kits are available to mount a 2.5" drive into a 3.5" desktop hard drive bay. Figure 2.12 shows the two drives together. As you can see, the 2.5" drive is significantly smaller in all three dimensions, but it does have the same connectors as its bigger cousin.
FIGURE 2.12 A 3.5" and 2.5" hard drive
Unlike conventional hard drives, solid-state drives (SSDs) have no moving parts—they use the same solid-state memory technology found in the other forms of flash memory. You can think of them as big versions of the flash drives that are so common.
Because they have no moving parts, SSDs are capable of transferring data much more quickly than HDDs could ever dream of doing. Recall from the “HDD Speeds” section that a 10k SATA HDD tops out at about 200 MBps. Even the slowest SSDs will run circles around that. The true speed of an SSD will be determined, of course, by the drive itself, but also the interface to which it's attached.
And because there's no need for spinning platters and read/write heads, SSDs can be made much smaller than HDDs, making them better for laptops and portable devices. SSDs have several other advantages over their mechanical counterparts as well, including the following:
The disadvantages of SSDs are as follows:
You will find that SSDs in the market generally have lower overall capacity than HDDs. For example, it's not uncommon to find HDDs over 8 TB in size, with 18 TB drives pacing the market. Conversely, the biggest commercially available SSD (as of this writing) is 8 TB. As for cost, HDDs run about 3 cents per GB and low-end SATA SSDs are about three times as expensive. Faster SSDs such as NVMe drives (which we'll get to in a minute) can be from four to ten times as expensive. Of course, prices are subject to (and it's guaranteed they will) change!
When used as a replacement for traditional HDDs, SSDs are expected to behave in a similar fashion, mainly by retaining contents across a power cycle. With SSD, you can also expect to maintain or exceed the speed of the HDD. SSDs can be made faster still by including a small amount of DRAM as a cache.
SSDs come in various shapes and sizes and have a few different interfaces and form factors. We will cover those in the upcoming “SSD Communication Interfaces” and “SSD Form Factors” sections.
A cost-saving alternative to a standard SSD that can still provide a significant increase in performance over conventional HDDs is the hybrid drive. Hybrid drives can be implemented in two ways: a solid-state hybrid drive and a dual-drive storage solution. Both forms of hybrid drives can take advantage of solutions such as Intel's Smart Response Technology (SRT), which informs the drive system of the most used and highest-value data. The drive can then load a copy of such data into the SSD portion of the hybrid drive for faster read access.
It should be noted that systems on which data is accessed randomly do not benefit from hybrid drive technology. Any data that is accessed for the first time will also not be accessed from flash memory, and it will take as long to access it as if it were accessed from a traditional hard drive. Repeated use, however, will result in the monitoring software's flagging of the data for caching in the SSD.
The solid-state hybrid drive (SSHD) is a conventional HDD manufactured with a substantial amount of flash memory–like solid-state storage aboard. The SSHD is known to the operating system as a single drive, and individual access to the separate components is unavailable to the user.
Dual-drive storage solutions can also benefit from technologies such as Intel's SRT. However, because they are implemented as two separate drives (one conventional HDD and one SSD), each with its own separate file system and drive letter, the user can also manually choose the data to move to the SSD for faster read access. Users can choose to implement dual-drive systems with SSDs of the same size as the HDD, resulting in a fuller caching scenario.
It's been said that the advent of the SSD was a major advancement for the computer industry. Solid-state drives are basically made from the same circuitry that RAM is, and they are really, really fast. When they were first on the market, the limitation in the system was the SATA controller that most hard drives were plugged into. So as enterprising computer engineers are known to do, some started looking into ways to overcome this barrier. The result is that there have been more interfaces designed for storage devices, with much faster speeds.
The CompTIA A+ exam objectives list three technologies as SSD communications interfaces: SATA, PCIe, and NVMe. We will cover them now.
At this point, SATA is a technology you should be somewhat familiar with. We covered it in Chapter 1, and the interface is shown earlier in this chapter in Figure 2.8. SATA is a bit unique among the other subjects in this SSD section because it can support mechanical hard drives as well as SSDs.
SSDs came onto the market in the mid-1990s before SATA was even a thing, but they were limited in popularity because of the major bottleneck—the PATA communications interface. Once SATA came along in the early 2000s, the two technologies felt like glorious companions. SATA 1.x could transfer data at 150 MBps, which was a lot faster than the conventional hard drives at the time. (The most common conventional standard was ATA/100, which maxed out at 100 MBps.) Then, of course, came SATA 2.x (300 MBps) and eventually SATA 3.x (600 MBps). Comparatively, conventional hard drives appear to be painfully slow. And they are.
Keep in mind that of all the SSD technologies we discuss in this chapter, the SATA interface is the slowest of them. So while SATA SSDs are about 6x faster than conventional HDDs, there is a lot of performance upside. SATA SSDs are still popular today because they are plentiful and cheap (compared to other SSDs), and motherboards usually have more SATA connectors than any other type of hard drive connector.
Peripheral Component Interconnect Express (PCIe) is another technology that was covered in Chapter 1 and is used for SSDs. PCIe was first introduced in 2002, technically a year before SATA, and both technologies took a little while to get widely adopted. Like SATA, PCIe has gone through several revisions, with each version being faster than the previous one. Table 2.1 shows the throughput of PCIe versions.
Version | Transfer rate | Throughput per lane (one direction) | Total x16 throughput (bidirectional) |
---|---|---|---|
1.0 | 2.5 GTps | 250 MBps | 8 GBps |
2.0 | 5.0 GTps | 500 MBps | 16 GBps |
3.0 | 8.0 GTps | 1 GBps | 32 GBps |
4.0 | 16.0 GTps | 2 GBps | 64 GBps |
5.0 | 32.0 GTps | 4 GBps | 128 GBps |
TABLE 2.1 PCIe standards and transfer rates
Looking at Table 2.1, something might immediately jump out at you. The transfer rate is specified in gigatransfers per second (GTps). This unit of measure isn't very commonly used, and most people talk about PCIe in terms of its data throughput. Before moving on, though, let's take a quick look at what GTps means. (It's highly unlikely you will be tested on this, but you might be curious.)
PCIe is a serial bus that encodes clock information into the data stream so the sender and receiver can keep track of the order of transmissions. PCIe 1.x and 2.x use what's called “8b/10b” encoding—every 8 bits of data are sent as a 10-bit symbol, which is decoded at the receiving end, so 20 percent of the raw bit rate is overhead. PCIe 3.0 and later use a more efficient 128b/130b scheme, which is why 8 GTps works out to roughly 1 GBps per lane. Gigatransfers per second refers to the total number of bits sent on the wire, whereas the throughput figures you are more used to seeing count only the data.
Also remember that PCIe slots and cards come in different lane widths—x1, x2, x4, x8, and x16. So, for example, a PCIe 3.0 x4 card will have total data throughput of 8 GBps (1 GBps for one lane in one direction, times two for bidirectional, times four for the four lanes).
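If you want to see where the figures in Table 2.1 come from, the arithmetic is easy to reproduce. The following Python sketch is our own illustration (not part of the exam objectives); the encoding efficiencies are the published PCIe values mentioned above.

```python
# Rough PCIe throughput estimate from transfer rate, encoding overhead, and lane count.
# Gen 1/2 use 8b/10b encoding (80% efficient); Gen 3+ use 128b/130b (~98.5% efficient).

ENCODING_EFFICIENCY = {
    "1.0": 8 / 10,
    "2.0": 8 / 10,
    "3.0": 128 / 130,
    "4.0": 128 / 130,
    "5.0": 128 / 130,
}

GTPS = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0, "4.0": 16.0, "5.0": 32.0}


def pcie_throughput_gbps(version: str, lanes: int, bidirectional: bool = True) -> float:
    """Approximate data throughput in GBps (gigabytes per second)."""
    data_bits_per_second = GTPS[version] * 1e9 * ENCODING_EFFICIENCY[version]
    gbytes_per_lane = data_bits_per_second / 8 / 1e9      # one lane, one direction
    total = gbytes_per_lane * lanes
    return total * 2 if bidirectional else total


# The PCIe 3.0 x4 example from the text: ~1 GBps per lane, ~8 GBps total bidirectional.
print(round(pcie_throughput_gbps("3.0", 4), 1))   # ~7.9
print(round(pcie_throughput_gbps("2.0", 16), 1))  # 16.0, matching Table 2.1
```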
What does all of this mean for SSDs? First, take a look at a picture of a PCIe SSD in Figure 2.13. This is a PCIe 2.0 x4 Kingston HyperX Predator. These drives came in capacities up to 960 GB and supported data reads of up to 2.64 GBps and maximum write speeds of 1.56 GBps. The drive is a little dated by today's standards, but it still serves as a great example. First, it uses the most common PCIe SSD lane width, which is x4. PCIe x2 drives are also common, with x8 and x16 drives being relatively rare. Second, notice the transfer speeds. Based on Table 2.1, PCIe 2.0 x4 has a maximum throughput of 4 GBps. This drive doesn't reach that level, especially on write speeds. This is the difference between the theoretical maximums of standards and the practical realities of building hardware. Even so, you can see that this PCIe SSD is significantly faster than a SATA SSD.
FIGURE 2.13 Kingston PCIe x4 SSD
Created by a consortium of manufacturers, including Intel, Samsung, Dell, SanDisk, and Seagate, and released in 2011, Non-Volatile Memory Express (NVMe) is an open standard designed to optimize the speed of data transfers. Unlike SATA and PCIe, NVMe doesn't define a physical connector. Said another way, there is no NVMe connector—NVMe is a logical interface (a command protocol) for nonvolatile memory that communicates over PCIe, whether the drive sits in a standard PCIe slot or an M.2 slot (which we will cover in the next section). Figure 2.14 shows a 1 TB Western Digital NVMe SSD.
FIGURE 2.14 M.2 NVMe SSD
NVMe drives are frighteningly fast—current NVMe SSDs can support data reads of up to 3.5 GBps, provided, of course, that the interface they are plugged into supports it as well. Because NVMe rides on PCIe, the drive is limited by the PCIe link behind it; for comparison, even the fastest SATA SSD tops out at about 600 MBps. Older PCIe versions would in theory impose their own ceilings, but you're not going to find any PCIe 1.0 or 2.0 NVMe SSDs, so it's not a problem.
One potential issue you might see with NVMe drives is that in order to use one as a boot drive, the motherboard must support it. If the motherboard has a built-in M.2 slot, odds are that the BIOS will support booting from an NVMe drive. If you are adding the drive via a PCIe slot, the BIOS might not be able to boot from it. Always check the motherboard documentation to ensure that it supports what you're trying to do.
Whereas a communications interface is the method the device uses to communicate with other components, a form factor describes the shape and size of a device. The two SSD form factors you need to know for the A+ exam are mSATA and M.2.
The Serial ATA International Organization has developed several specifications—you've already been introduced to SATA and eSATA. Next on the list is a form factor designed specifically for laptops and smaller portable devices: mini-Serial ATA (mSATA). mSATA was announced in 2009 as part of SATA version 3.1 and hit the market in 2010.
mSATA uses the same physical layout as the Mini PCI Express (mPCIe) standard, and both have a 30 mm–wide, 52-pin connector. The wiring and communications interfaces between the two standards are different, though: mSATA uses SATA technology, whereas mPCIe uses PCIe. In addition, mPCIe card types are as varied as their larger PCIe cousins, including video, network, cellular, and other devices. mSATA, on the other hand, is dedicated to storage devices based on SATA bus standards. mSATA cards come in 30 mm × 50.95 mm full-size and 30 mm × 26.8 mm half-size versions. Figure 2.15 shows a full-size mSATA SSD on top of a 2.5" SATA SSD.
FIGURE 2.15 mSATA SSD and a 2.5" SATA SSD
By Vladsinger - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=30037926
The wiring differences between mSATA and mPCIe can pose some interesting challenges. Both types of cards fit into the same slot, but which one is supported depends on the motherboard. You might have heard us say this before, but as always, check the motherboard's documentation to be sure.
Originally developed under the name Next Generation Form Factor (NGFF), M.2 (pronounced “M dot 2”) was born from the desire to standardize small form factor SSDs. We touched briefly on M.2 in Chapter 1, where we mentioned that although M.2 is primarily used for hard drives, it supports other types of cards, such as Wi-Fi, Bluetooth, Global Positioning System (GPS), and near-field communication (NFC) cards, as well as PCIe and SATA connections. It's a form factor designed to replace the mSATA standard for ultra-small expansion components in laptops and smaller devices. Whereas mSATA uses a 30 mm, 52-pin connector, M.2 uses a narrower 22 mm, 66-pin connector.
One interesting connectivity feature of M.2 is that the slots and cards are keyed such that only a specific type of card can fit into a certain slot. The keys are given letter names to distinguish them from each other, starting with the letter A and moving up the alphabet as the location of the key moves across the expansion card. Table 2.2 explains the slot names, some interface types supported, and common uses.
Key | Common interfaces | Uses |
---|---|---|
A | PCIe x2, USB 2.0 | Wi-Fi, Bluetooth, and cellular cards |
B | PCIe x2, SATA, USB 2.0, USB 3.0, audio | SATA and PCIe x2 SSDs |
E | PCIe x2, USB 2.0 | Wi-Fi, Bluetooth, and cellular cards |
M | PCIe x4, SATA | PCIe x4 SSDs |
TABLE 2.2 M.2 keying characteristics
Let's look at some examples. Figure 2.16 shows four different M.2 cards. From left to right, they are an A- and E-keyed Wi-Fi card, two B- and M-keyed SSDs, and an M-keyed SSD. Of the four, only the M-keyed SSD can reach the fastest speeds (up to 1.8 GBps), because it supports PCIe x4. SSDs on the market are keyed B, M, or B+M. Sockets, however, carry a single key, so a B-keyed drive won't fit in an M socket (and vice versa). A B+M-keyed drive will fit into either a B socket or an M socket.
Another interesting feature of the cards is that they are also named based on their size. For example, you will see card designations such as 1630, 2242, 2280, 22110, or 3042. The first two numbers refer to the width, and the rest to the length (in millimeters) of the card. In Figure 2.16, you see a 1630, a 2242, and two 2280 cards.
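If it helps to see that naming rule spelled out, here is a tiny Python helper of our own that splits a size designation into width and length; the function name is purely illustrative.

```python
# Decode an M.2 size designation: the first two digits are the width in mm,
# the remaining digits are the length in mm (e.g., 2280 -> 22 mm x 80 mm).

def m2_dimensions(code: str) -> tuple[int, int]:
    width_mm = int(code[:2])
    length_mm = int(code[2:])
    return width_mm, length_mm


for code in ("1630", "2242", "2280", "22110", "3042"):
    w, l = m2_dimensions(code)
    print(f"M.2 {code}: {w} mm wide x {l} mm long")
```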
Figure 2.17 shows a motherboard with two M.2 slots. The one on the left is E-keyed, and the one on the right is B-keyed. The left slot is designed for an E-keyed Wi-Fi NIC, and the right one for a B-keyed SSD.
FIGURE 2.16 Four M.2 cards
Photo credit: Andrew Cunningham/Ars Technica
FIGURE 2.17 M.2 E-keyed and B-keyed slots
Photo credit: Andrew Cunningham/Ars Technica
Many motherboards today come with protective covers over the M.2 slots, a welcome feature that adds a bit of protection for the drive inside the case. An example is shown in Figure 2.18. The bottom M.2 slot is covered, and the top slot (just above the PCIe x4 connector) has the cover removed. Notice the screw holes to support 42 mm, 60 mm, 80 mm, and 110 mm device lengths.
FIGURE 2.18 M.2 connectors covered and uncovered
As mentioned earlier, M.2 is a form factor, not a bus standard. M.2 supports SATA, USB, and PCIe buses. What does that mean for M.2 hard drives? It means that if you purchase an M.2 SATA hard drive, it will have the same speed limitation as SATA III, or about 600 MBps. That's not terrible, but it means that the primary advantage of an M.2 SATA drive versus a conventional SATA SSD is size. An M.2 PCIe hard drive is an entirely different story. PCIe, you will recall, is much faster than SATA. A PCIe 2.0 x1 bus supports one-way data transfers of 500 MBps. That is close to SATA III speed, and it's only a single lane for an older standard. NVMe M.2 drives kick up the speed even further. If you want the gold standard for hard drive speed, NVMe M.2 is the way to go.
To wrap up this section on SSDs, let's look at one more picture showing the difference in sizes between a few options, all from the manufacturer Micron. Figure 2.19 has a 2.5" SSD on top, with (from left to right) a full-sized mSATA drive, an M.2 22110 drive, and an M.2 2280 drive. All are SSDs that can offer the same capacity, but the form factors differ quite a bit from each other.
Multiple hard drives can work together as one system, often providing increased performance (faster disk reads and writes) or fault tolerance (protection against one disk failing). Such systems are called Redundant Array of Independent (or Inexpensive) Disks (RAID). RAID can be implemented in software, such as through the operating system, or in hardware, such as through the motherboard BIOS or a RAID hardware enclosure. Hardware RAID is more efficient and offers higher performance but at an increased cost.
FIGURE 2.19 Four different SSDs
Photo courtesy of TweakTown.com
There are several types of RAID. The following are the most commonly used RAID levels:
RAID 0 Also known as disk striping, RAID 0 writes data across two or more drives in stripes so that multiple disks work in parallel, which can increase read and write performance. It provides no fault tolerance; if any single drive fails, all data in the array is lost. A minimum of two drives is required.
RAID 1 Also known as disk mirroring, RAID 1 writes an identical copy of the data to each of two drives. If one drive fails, the other still holds a complete copy, providing fault tolerance at the cost of half of the purchased capacity. A minimum of two drives is required.
RAID 5 RAID 5 combines the benefits of both RAID 0 and RAID 1, creating a redundant striped volume set, sometimes called a stripe set with parity. Unlike RAID 1, however, RAID 5 does not employ mirroring for redundancy. Each stripe places data on n-1 disks, and parity computed from the data is placed on the remaining disk. The parity is interleaved across all the drives in the array so that neighboring stripes have parity on different disks. If one drive fails, the parity information for the stripes that lost data can be combined with the remaining data from the working drives to derive what was on the failed drive and to rebuild the set once the drive is replaced.
The same derivation process is used to continue serving client requests until the failed drive can be replaced. This can result in a noticeable performance decrease, one that is predictable because all drives contain the same amount of data and parity. Furthermore, the loss of an additional drive before the rebuild completes results in a catastrophic loss of all data in the array. Note that while live requests are served before the array is rebuilt, nothing needs to be computed for stripes that lost only their parity; recomputing parity for those stripes is required only when rebuilding the array. A minimum of three drives is required for RAID 5, and the equivalent space of one drive is lost to redundancy. The more drives in the array, the smaller a percentage that single disk represents.
Figure 2.20 illustrates RAID 1 and RAID 5.
FIGURE 2.20 RAID 1 and RAID 5
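To make the parity idea concrete, here is a toy Python sketch of our own. Real RAID 5 controllers work on whole disk blocks rather than single bytes, but the principle is the same: parity is the XOR of the data in a stripe, so any one missing piece can be rebuilt from the survivors.

```python
# Toy illustration of RAID 5 parity: parity is the XOR of the data blocks in a stripe,
# so any single missing block can be rebuilt by XORing everything that survived.
from functools import reduce
from operator import xor


def make_parity(data_blocks: list[int]) -> int:
    return reduce(xor, data_blocks)


def rebuild_missing(surviving_blocks: list[int], parity: int) -> int:
    """Recover the one lost data block from the survivors plus the parity."""
    return reduce(xor, surviving_blocks, parity)


stripe = [0b1011_0010, 0b0110_1100, 0b1111_0000]   # data on three of four disks
parity = make_parity(stripe)                        # stored on the fourth disk

lost = stripe[1]                                    # pretend disk 2 failed
recovered = rebuild_missing([stripe[0], stripe[2]], parity)
print(recovered == lost)                            # True
```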
RAID 10 Also known as RAID 1+0, RAID 10 adds fault tolerance to RAID 0 through the RAID 1 mirroring of each disk in the RAID 0 striped set. Its inverse, known as RAID 0+1, mirrors a complete striped set to another striped set just like it. Both of these implementations require a minimum of four drives and, because of the RAID 1 component, use half of your purchased storage space for mirroring.
There are other implementations of RAID that are not included in the CompTIA A+ exam objectives. Examples include RAID 3 and RAID 4, which place all parity on a single drive, so the performance impact of losing a drive depends on which drive fails. RAID 6 is essentially RAID 5 with the ability to lose two disks and still function; it uses the equivalent of two parity disks as it stripes its data and distributed parity blocks across all disks, in a fashion similar to RAID 5. A minimum of four disks is required to build a RAID 6 array.
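As a quick sanity check on usable space, the following Python sketch applies the capacity rules described above to identical drives. It ignores filesystem overhead and drive-size rounding, so treat it as a rough planning aid rather than an exact figure.

```python
# Usable capacity for common RAID levels, assuming n identical disks of the same size.
# RAID 5 loses one disk's worth to parity, RAID 6 loses two,
# and RAID 1/RAID 10 lose half of the purchased space to mirroring.

def usable_tb(level: int, disks: int, size_tb: float) -> float:
    if level == 0:
        return disks * size_tb
    if level == 1:
        return size_tb                      # two-disk mirror
    if level == 5:
        return (disks - 1) * size_tb        # minimum 3 disks
    if level == 6:
        return (disks - 2) * size_tb        # minimum 4 disks
    if level == 10:
        return (disks // 2) * size_tb       # minimum 4 disks
    raise ValueError("unsupported RAID level")


for level, disks in ((0, 4), (1, 2), (5, 4), (6, 4), (10, 4)):
    print(f"RAID {level} with {disks} x 2 TB disks: {usable_tb(level, disks, 2.0)} TB usable")
```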
Thus far we've focused on storage media that is internal to a PC, but external and removable storage options exist as well. Among the other types of storage available are flash drives, memory cards, optical drives, and external hard drives. The following sections present the details about removable storage solutions.
Once used only for primary memory, the same kinds of components that sit on your motherboard as RAM can now be found, in various physical sizes and quantities, among today's solid-state storage solutions. These include older removable and nonremovable flash memory mechanisms, Secure Digital (SD) and other memory cards, and USB flash drives. Each of these technologies can reliably store a staggering amount of information in a minute form factor. Manufacturers use innovative packaging for some of these products to provide convenient transport options to users, such as keychain attachments. Additionally, recall the SSD alternatives to magnetic hard drives mentioned earlier in this chapter.
For many years, modules known as flash memory have offered low- to mid-capacity storage for devices. The name comes from the ability to use electricity to alter the contents of the memory instantly. The original flash memory is still used in devices that require a nonvolatile means of storing critical data and code that is often used in booting the device, such as routers and switches.
For example, Cisco Systems uses flash memory in various forms to store its Internetwork Operating System (IOS), which is accessed from flash during bootup and, in certain cases, throughout operation uptime and therefore during an administrator's configuration sessions. Lesser models store the IOS in compressed form in the flash memory device and then decompress the IOS into RAM, where it is used during configuration and operation. In this case, the flash memory is not accessed again after the boot-up process is complete, unless its contents are being changed, as in an IOS upgrade. Certain devices use externally removable PC Card technology as flash memory for similar purposes.
The following sections explain a bit more about today's most popular forms of flash memory: USB flash drives and memory cards.
USB flash drives are incredibly versatile and convenient devices that enable you to store large quantities of information in a very small form factor. Many such devices are merely extensions of the host's USB connector, extending out from the interface but adding little to its width, making them easy to transport, whether in a pocket or a laptop bag. Figure 2.21 illustrates an example of one of these components and its relative size.
FIGURE 2.21 A USB flash drive
USB flash drives capitalize on the versatility of the USB interface, taking advantage of Windows' Plug and Play, AutoPlay, and Safely Remove Hardware features as well as the physical strength of the connector. Upon insertion, these devices announce themselves to Windows File Explorer as removable drives, and they show up in the Explorer window with a drive letter. This software interface allows for drag-and-drop copying and most of the other Explorer functions performed on standard drives. Note that you might have to use the Disk Management utility (discussed in Chapter 13) to assign a drive letter manually to a USB flash drive if it fails to acquire one itself. This can happen in certain cases, such as when the previous letter assigned to the drive has been taken by another device in the USB flash drive's absence.
Today's smaller devices require some form of removable solid-state memory that can be used for temporary and permanent storage of digital information. Modern electronics, as well as most contemporary digital still cameras, use some form of removable memory card to store still images permanently or until they can be copied off or printed out. Of these, the Secure Digital (SD) format has emerged as the preeminent leader of the pack, which includes the older MultiMediaCard (MMC) format on which SD is based. Both of these cards measure 32 mm × 24 mm, and slots that receive them are often marked for both. The SD card is slightly thicker than the MMC and has a write-protect notch (and often a switch to open and close the notch), unlike MMC.
Even smaller devices, such as mobile phones, have SD solutions of their own. One of these products, known as miniSD, is slightly thinner than SD and measures 21.5 mm × 20 mm. The other, microSD, is thinner yet and only 15 mm × 11 mm. Both of these reduced formats have adapters that allow them to be used in standard SD slots. Figure 2.22 shows an SD card and a microSD card next to a ruler marked in inches.
FIGURE 2.22 Typical SD cards
Table 2.3 lists additional memory card formats, the slots for some of which can be seen in the images that follow the table.
Format | Dimensions | Details | Year introduced |
---|---|---|---|
CompactFlash (CF) | 36 mm × 43 mm | Type I and Type II variants; Type II used by IBM for Microdrive | 1994 |
xD-Picture Card | 20 mm × 25 mm | Used primarily in digital cameras | 2002 |
TABLE 2.3 Additional memory card formats
Figure 2.23 shows the memory-card slots of an HP PhotoSmart printer, which is capable of reading these devices and printing from them directly or creating a drive letter for access to the contents over its USB connection to the computer. Clockwise from the upper left, these slots accommodate CF/Microdrive, SmartMedia, Memory Stick (bottom right), and MMC/SD. The industry provides almost any adapter or converter to allow the various formats to work together.
FIGURE 2.23 Card slots in a printer
Nearly all of today's laptops have built-in memory card slots, and many desktops have readers built into the front or top panel of the case as well. If a computer doesn't have memory card slots built into the case, it's easy to add external card readers. Most are connected via USB, such as the one shown in Figure 2.24 (front first, then back), and are widely available in many different configurations.
FIGURE 2.24 A USB-attached card reader
Many of the removable storage devices mentioned are hot-swappable. This means that you can insert and remove the device with the system powered on. Most USB-attached devices without a filesystem fall into this category. Non–hot-swappable devices, in contrast, either cannot have the system's power applied when they are inserted or removed or have some sort of additional conditions for their insertion or removal. One subset is occasionally referred to as cold-swappable, the other as warm-swappable. The system power must be off before you can insert or remove cold-swappable devices. An example of a cold-swappable device is anything connected to a SATA connector on the motherboard.
Warm-swappable devices include USB flash drives and external drives that have a filesystem. Windows and other operating systems tend to leave files open while accessing them and write cached changes to them at a later time, based on the algorithm in use by the software. Removing such a device without using the Safely Remove Hardware and Eject Media utility can result in data loss. However, after stopping the device with the utility, you can remove it without powering down the system—hence the warm component of the category's name. Officially, though, these devices are classified as hot-swappable.
Hardware-based RAID systems benefit from devices and bays with a single connector that contains both power and data connections instead of two separate connectors. This is known as Single Connector Attachment (SCA). SCA interfaces have ground leads that are longer than the power leads so that they make contact first and lose contact last. SATA power connectors are designed in a similar fashion for the same purpose. This arrangement ensures that no power lead makes contact without its corresponding ground lead, which would often result in damage to the drive. Drives based on SCA are hot-swappable. RAID systems that must be taken offline before drives are changed out, but whose system power can remain on, are examples of warm-swappable systems.
The final category of storage devices we will look at is optical drives. They get their name because instead of storing data in magnetic fields the way conventional HDDs do, they read and write data with a laser. The laser scans the surface of a spinning plastic disc, with data encoded as microscopic pits and flat areas (lands) along the disc's track.
With the popularity of high-speed Internet access and streaming services, optical drives have lost much of their popularity. For about 20 years they were practically required components, but today they are far less common. The most advanced optical storage technology in common use is the Blu-ray Disc (BD) drive. It replaced the digital versatile disc (DVD) drive, also called the digital video disc drive, which in turn replaced the compact disc (CD) drive. Each type of optical drive can also be expected to support the technology that came before it. Optical storage devices began earnestly replacing floppy diskette drives in the late 1990s. Although these discs, like HDDs, offer far greater capacity and performance than floppies, they were never intended to replace hard disk drives; HDDs greatly exceed the capacity and read/write performance of optical drives.
The CDs, DVDs, and BDs used for data storage, which may require multiple data reads and writes, are virtually the same as those used for permanent recorded audio and video. The way that data, audio, and video information is written to consumer-recordable versions makes them virtually indistinguishable from such professionally manufactured discs. Any differences that arise are due to the format used to encode the digital information on the disc. Despite the differences among the professional and consumer formats, newer players have no issue with any type of disc used. Figure 2.25 shows an example of an internal 5¼" DVD-ROM drive, which also accepts CD-ROM discs. Modern optical drives are indistinguishable from older ones, aside from obvious markings concerning their capabilities. External drives that connect via USB are more popular (and portable!) than their internal cousins.
FIGURE 2.25 A DVD-ROM drive
The amount of data that can be stored on the three primary formats of optical disc varies greatly, with each generation of disc exceeding the capacity of all previous generations. We'll start with the oldest first to show the progression of technologies.
When CDs first were used with computers, they were a huge change from floppy disks. Instead of installing the program of the day using 100 floppy disks, you could use a single CD-ROM, which can hold approximately 650 MB in its original, least-capable format. Although CDs capable of storing 700 MB eventually became and continue to be the most common, discs with 800 MB and 900 MB capacities have been standardized as well.
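For a bit of perspective on that jump, a line or two of arithmetic shows how many floppies a single CD-ROM replaces. This is our own illustration, assuming standard 1.44 MB high-density 3.5" floppies.

```python
# How many 1.44 MB floppy disks does one CD-ROM replace?
floppy_mb = 1.44
for cd_mb in (650, 700, 800, 900):
    print(f"{cd_mb} MB CD-ROM ~= {cd_mb / floppy_mb:.0f} floppy disks")
```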
CDs were rather limited in technology, though. For example, data could only be written to one side, and only one layer of data was permitted on that side. DVDs came along with much higher base capacity, but also the ability to store on both sides and have two layers of data on each side.
The basic DVD disc is still a single-sided disc that has a single layer of encoded information. These discs have a capacity of 4.7 GB, over five times the highest CD capacity. Simple multiplication can sometimes be used to arrive at the capacities of other DVD varieties. For example, when another media surface is added on the side of the disc where the label is often applied, a double-sided disc is created. Such double-sided discs (DVD DS, for double-sided) have a capacity of 9.4 GB, exactly twice that of a single-sided disc.
Practically speaking, the expected 9.4 GB capacity from two independent layers isn't realized when those layers are placed on the same side of a DVD, resulting in only 8.5 GB of usable space. This technology is known as DVD DL (DL for dual-layer), attained by placing two media surfaces on the same side of the disc, one on top of the other, and using a more sophisticated mechanism for reading and writing. The loss of capacity is due to the space between tracks on both layers being 10 percent wider than normal to facilitate burning one layer without affecting the other. This results in about 90 percent remaining capacity per layer. Add the DL technology to a double-sided disc, and you have a disc capable of holding 17.1 GB of information—again, twice the capacity of the single-sided version.
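The arithmetic behind those capacities is straightforward. The short Python sketch below uses the rough 90 percent rule of thumb described above; the slightly different exact figures in Table 2.4 (8.5 GB and 17.1 GB) come from the precise track-pitch values rather than this approximation.

```python
# DVD capacities built up from the single-layer base of 4.7 GB:
# a second side doubles capacity; a second layer on the same side adds only ~90%
# because the track pitch on both layers is widened by about 10 percent.
single_layer = 4.7

double_sided = 2 * single_layer            # DVD DS, SL: ~9.4 GB
dual_layer   = 2 * single_layer * 0.90     # DVD SS, DL: ~8.5 GB (Table 2.4 quotes 8.5)
ds_dl        = 2 * dual_layer              # DVD DS, DL: ~17 GB (Table 2.4 quotes 17.1)

print(double_sided, round(dual_layer, 2), round(ds_dl, 1))
```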
The current generation of optical storage technology, Blu-ray, was designed for modern high-definition video sources. The equipment used to read the resulting discs employs a violet laser, in contrast to the red laser used with standard DVD and CD technologies. Taking a bit of creative license with the color of the laser, the Blu-ray Disc Association named itself and the technology Blu-ray Disc (BD), after this visibly different characteristic. Blu-ray technology further increases the storage capacity of optical media without changing the form factor. On a 12 cm disc, similar to those used for CDs and DVDs, BD derives a 25 GB storage capacity from the basic disc. When you add a second layer to the same or opposite side of the disc, you attain 50 GB of storage. The Blu-ray laser is of a shorter wavelength (405 nm) than that of DVD (650 nm) and CD (780 nm) technologies. As a result, and through the use of refined optics, the laser can be focused on a much smaller area of the disc. This leads to a higher density of information being stored in the same area.
Optical drives are rated in terms of their data transfer speed. The first CD-ROM drives transferred data at the same speed as home audio CD players, 150 KBps, referred to as 1X. Soon after, CD drives rated as 2X drives that would transfer data at 300 KBps appeared. They increased the spin speed in order to increase the data transfer rate. This system of ratings continued up until the 8X speed was reached. At that point, the CDs were spinning so fast that there was a danger of them flying apart inside the drive. So, although future CD drives used the same rating (as in 16X, 32X, and so on), their rating was expressed in terms of theoretical maximum transfer rate; 52X is widely regarded as the highest multiplier for data CDs. Therefore, the drive isn't necessarily spinning faster, but through electronics and buffering advances, the transfer rates continued to increase.
The standard DVD-ROM 1X transfer rate is 1.4 MBps, already nine times that of the comparably labeled CD-ROM. As a result, to surpass the transfer rate of a 52X CD-ROM drive, a DVD-ROM drive need only be rated 6X. DVD transfer rates of 24X at the upper end of the scale are common.
The 1X transfer rate for Blu-ray is 4.5 MBps, roughly 3¼ times that of the comparable DVD multiplier and close to 30 times that of the 1X CD transfer rate. It takes 2X speeds to play commercial Blu-ray videos properly, and 16X drives are common today.
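Because every rating is just the 1X base rate times the multiplier, the comparisons above are easy to verify. The sketch below uses the base rates quoted in this section; the specific drive multipliers are only examples.

```python
# Base (1X) transfer rates for the three optical generations, per the text.
BASE_RATE_MBPS = {"CD": 0.15, "DVD": 1.4, "BD": 4.5}   # 150 KBps, 1.4 MBps, 4.5 MBps


def transfer_rate_mbps(media: str, multiplier: int) -> float:
    return BASE_RATE_MBPS[media] * multiplier


print(transfer_rate_mbps("CD", 52))   # 7.8 MBps  -- top-rated data CD drive
print(transfer_rate_mbps("DVD", 6))   # 8.4 MBps  -- why a 6X DVD drive beats a 52X CD drive
print(transfer_rate_mbps("BD", 16))   # 72 MBps   -- a common Blu-ray drive today
```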
Years after the original factory-made CD-ROM discs and the drives that could read them were developed, the industry, strongly persuaded by consumer demand, developed discs that, through the use of associated drives, could be written to once and then used in the same fashion as the original CD-ROM discs. The firmware with which the drives were equipped could vary the power of the laser to achieve the desired result. At standard power, the laser allowed discs inserted in these drives to be read. Increasing the power of the laser allowed the crystalline media surface to be melted and changed in such a way that light would reflect or refract from the surface in microscopic increments. This characteristic enabled mimicking the way in which the original CD-ROM discs stored data.
Eventually, discs that could be written to, erased, and rewritten were developed. Drives that contained the firmware to recognize these discs and control the laser varied the laser's power in three levels. The original two levels closely matched those of the writable discs and drives. The third level, somewhere in between, could neutralize the crystalline material without writing new information to the disc. This medium level of power left the disc surface in a state similar to its original, unwritten state. Subsequent high-power laser usage could write new information to the neutralized locations. Drives capable of writing to optical discs are known as burners, because they essentially burn a new image into the disc.
Two different types of writable CD are available. The first type is one that is recordable (-R), and the second is rewritable (-RW). For the first (CD-R), data is written once and then the disc is finalized. With rewritable CDs (CD-RW), data can be rewritten multiple times. Note that over time and with several rewrites, these types of discs can become unstable.
Burnable DVDs use similar nomenclature to CDs, with a notable twist. In addition to DVD-R and DVD-RW, there are “plus” standards of DVD+R and DVD+RW. This is thanks to there being two competing DVD consortiums, each with its own preferred format. The “plus” standards come from the DVD+RW Alliance, whereas the “dash” counterparts are specifications of the DVD Forum. The number of sectors per disc varies between the “plus” and “dash” variants, so older drives might not support both types. The firmware in today's drives knows to check for all possible variations in encoding and capability. You shouldn't run into problems today, but it is possible.
Finally, the Blu-ray Disc Association duplicated the use of the -R suffix to denote a disc capable of being recorded only once by the consumer. Instead of the familiar -RW, however, the association settled on -RE, short for re-recordable. As a result, watch for discs labeled BD-R and BD-RE. Dual-layer versions of these discs can be found as well. Table 2.4 draws together the most popular optical-disc formats and lists their respective capacities; the rounded figures in parentheses are the capacities most commonly quoted in the industry.
Disc format | Capacity |
---|---|
CD SS (includes recordable versions) | 650 MB, 700 MB, 800 MB, 900 MB |
DVD-R/RW SS, SL | 4.71 GB (4.7 GB) |
DVD+R/RW SS, SL | 4.70 GB (4.7 GB) |
DVD-R, DVD+R DS, SL | 9.4 GB |
DVD-R SS, DL | 8.54 GB (8.5 GB) |
DVD+R SS, DL | 8.55 GB (8.5 GB) |
DVD+R DS, DL | 17.1 GB |
BD-R/RE SS, SL | 25 GB |
BD-R/RE SS, DL | 50 GB |
BD-R/RE DS, DL | 100 GB |
SS = single-sided; DS = double-sided; SL = single-layer; DL = dual-layer
TABLE 2.4 Optical discs and their capacities
The removal and installation of storage devices, such as hard drives and optical drives, is pretty straightforward. There really isn't any deviation in the process of installing or exchanging the hardware. Fortunately, with today's operating systems, little to no configuration is required for such devices. The Plug and Play BIOS and operating system work together to recognize the devices. However, you still have to partition and format an out-of-the-box hard drive before it will accept an operating system installation. Nevertheless, today's operating systems allow for a pain-free partition/format/setup experience by handling the entire process, if you let them.
Removing any component is frequently easier than installing the same part. Most people could demolish a house, perhaps not safely, without knowing the intricacies of construction, but very few people are capable of building one. Similarly, many people could figure out how to remove a storage device, as long as they can get into the case to begin with, but only a few could start from scratch and successfully install one without guidance.
In Exercise 2.1, you'll remove an internal storage device.
An obvious difference among storage devices is their form factor. This is the term used to describe the physical dimensions of a storage device, such as the width (for example, 2.5" or 3.5") and height of the drive.
You will need to determine whether you have an open bay in the chassis to accommodate the form factor of the storage device that you want to install. Adapters exist that allow a device of small size to fit into a larger bay. For obvious reasons, the converse is not also true.
In Exercise 2.2, you'll install an internal storage device.
The computer's components would not be able to operate without power. The device in the computer that provides this power is the power supply (see Figure 2.26). A power supply converts 110V or 220V AC current into the DC voltages that a computer needs to operate. These are +3.3VDC, +5VDC, –5VDC (on older systems), +12VDC, and –12VDC. The jacket on the leads carrying each type of voltage has a different industry-standard color-coding for faster recognition. Black ground leads offer the reference that gives the voltage leads their respective magnitudes. The +3.3VDC voltage was first offered on ATX motherboards.
FIGURE 2.26 A desktop power supply
Throughout this section, you will see us use the terms watts, volts, and amps. If you're working with electricity a lot, you might also see the term ohms. To help understand what these terms mean, let's use an analogy of water flowing through a pipe. Amps would be the amount of water flowing through the pipe; voltage would be the water pressure; and watts would be the power that the water could provide. (Watts mathematically are volts × amps.) If there were a filter or other barrier in the pipe, that would provide resistance, which is measured in ohms. In non-analogous terms, amps are the unit of current flow; volts are the unit of force; watts are the unit for power (watts = volts × amps); and ohms are resistance.
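Here is the same relationship expressed as a couple of one-line Python functions, with a hypothetical graphics card as the example load (the 25 A figure is made up purely for illustration).

```python
# Power (watts) = volts x amps; resistance (ohms) = volts / amps (Ohm's law).
def watts(volts: float, amps: float) -> float:
    return volts * amps


def ohms(volts: float, amps: float) -> float:
    return volts / amps


# A hypothetical graphics card drawing 25 A from the +12V rail:
print(watts(12, 25))   # 300 W of power
print(ohms(12, 25))    # 0.48 ohms of effective load resistance
```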
Computer power supplies need to get their power from somewhere, and that is typically a wall outlet. There may be a battery backup device in between, called an uninterruptible power supply (UPS), which we will talk about later in the “Battery Backup Systems” section, but the point is that the power supply doesn't generate its own power. It converts AC power from the wall into the DC power that components use.
Countries have differing standards on the voltage provided by wall outlets. In the United States, it's typically 110V and 220V. The 110V outlets are the “normal” outlets that most electronics, including computers, are plugged into. The 220V outlets are for high-energy devices such as electric ranges and clothes dryers. Fortunately, the two plugs are completely different (as shown in Figure 2.27) to help us avoid plugging the wrong thing into the wrong place and frying the component. As noted, though, other countries have different standards, and power supply manufacturers want to ensure their devices work in different countries.
FIGURE 2.27 110V (left) and 220V (right) wall outlets
Therefore, some power supplies have a recessed, two-position slider switch, often a red one, on the rear that is exposed through the case. You can see the one for the power supply shown in Figure 2.26. Dual-voltage options on such power supplies read 110 and 220, 115 and 230, or 120 and 240. This selector switch is used to adjust for the voltage level used in the country where the computer is in service. As noted earlier, in the United States, the power grid supplies anywhere from 110V to 120V. However, in Europe, for instance, the voltage supplied is double, ranging from 220V to 240V.
Although the voltage is the same as what is used in the United States to power high-voltage appliances, the amperage is much lower. The point is, the switch is not there to allow multiple types of outlets to be used in the same country. If the wrong voltage is chosen in the United States, the power supply will expect more voltage than it receives and might not power up at all. If the wrong voltage is selected in Europe, however, the power supply will receive more voltage than it is set for. The result could be disastrous for the entire computer and could result in sparking or starting a fire. Always check the switch before powering up a new or recently relocated computer. In the United States and other countries that use the same voltage, check the setting of this switch if the computer fails to power up.
Power supplies all provide the same voltages to a system, such as +3.3V, +5V, and +12V. Each of these can be referred to as a rail, because each one comes from a specific tap (or rail) within the power supply. Some power supplies provide multiple 12V rails in an effort to supply more power overall to components that require 12V. For instance, in dual-rail power supplies, one rail might be dedicated to the CPU, while the other is used to supply power to all of the other components that need 12V.
The problem that can arise in high-powered systems is that although the collective power supplied by all rails is greater than that of a single-rail power supply, each rail provides less power on its own. As a result, it is easier to overdraw one of the multiple rails in such a system, causing a protective shutdown of the power supply. Care must be taken to balance the load on the rails if the total amperage drawn will exceed what any one rail can supply. If the total power required is less than any single rail can provide, however, there is no danger of overloading a rail.
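The following sketch shows the kind of bookkeeping involved. The rail rating and component draws are hypothetical numbers chosen to illustrate the point: the total draw would be fine on one large rail, yet it overloads one rail of this imaginary dual-rail design.

```python
# Checking whether a dual-rail power supply's 12V rails are balanced.
# Hypothetical numbers: each rail rated for 20 A; component draws are illustrative only.
RAIL_LIMIT_AMPS = 20

rails = {
    "12V1 (CPU)":        [11],            # CPU package draw in amps
    "12V2 (everything)":  [18, 4, 1.5],   # graphics card, drives, fans
}

for name, draws in rails.items():
    total = sum(draws)
    status = "OK" if total <= RAIL_LIMIT_AMPS else "OVERLOADED -- rebalance or use a single-rail PSU"
    print(f"{name}: {total:.1f} A of {RAIL_LIMIT_AMPS} A -> {status}")
```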
Power supplies are rated in watts. A watt is a unit of power. The higher the number, the more power your computer can draw from the power supply. Think of this rating as the “capacity” of the device to supply power. Most computers require power supplies in the 350- to 500-watt range. Higher wattage power supplies, say 750- to 900-watt, might be required for more advanced systems that employ power-hungry graphics cards or multiple disk drives, for instance. As of this writing, power supplies of up to 2,000 watts were available for desktop machines. It is important to consider the draw that the various components and subcomponents of your computer place on the power supply before choosing one or its replacement.
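When sizing a power supply, a rough budget like the one below is a good starting point. The component figures are hypothetical estimates rather than measured values, and the 30 percent headroom is a common rule of thumb, not a requirement; always check the specifications of your actual parts.

```python
# Rough wattage budget for choosing a power supply. Component figures are
# hypothetical estimates, not measured values -- check your actual parts' specs.
component_draw_watts = {
    "CPU": 105,
    "graphics card": 220,
    "motherboard/RAM": 60,
    "SSD + HDD": 15,
    "fans/USB devices": 30,
}

total = sum(component_draw_watts.values())
recommended = total * 1.3          # ~30 percent headroom is a common rule of thumb
print(f"Estimated draw: {total} W; look for a PSU of at least {round(recommended, -1):.0f} W")
```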
The connectors coming from the power supply are quite varied these days. Some PSUs have connectors permanently attached, whereas others let you attach and detach power cables as needed, based on the devices installed in the system. The following sections detail and illustrate the most common power connectors in use today.
ATX motherboards use a single block connector from the power supply. When ATX boards were first introduced, this connector was enough to power the motherboard, CPU, memory, and all expansion slots. The original ATX system connector provides the six voltages required and delivers them all through one easy-to-use, single 20-pin connector. Figure 2.28 shows an example of an ATX system connector.
FIGURE 2.28 20-pin ATX power connector
When the Pentium 4 processor was introduced, it required much more power than previous CPU models. Power measured in watts is a multiplicative function of voltage and current. To keep the voltage low meant that amperage would have to increase, but it wasn't feasible to supply such current from the power supply itself. Instead, it was decided to deliver 12V at lower amperage to a voltage regulator module (VRM) near the CPU. The higher current at a lower voltage was possible at that shorter distance from the CPU.
As a result of this shift, motherboard and power supply manufacturers needed to get this more varied power to the system board. The solution was the ATX12V 1.0 standard, which added two supplemental connectors. One was a single 6-pin auxiliary connector that supplied additional +3.3V and +5V leads and their grounds. The other was a 4-pin square mini-version of the ATX connector, referred to as a P4 (for the processor that first required them) connector, which supplied two +12V leads and their grounds. EPS12V uses an 8-pin version, called the processor power connector, which doubles the P4's function with four +12V leads and four grounds. Figure 2.29 illustrates the P4 connector. The 8-pin processor power connector is similar but has two rows of 4 and, despite its uncanny resemblance, is keyed differently from the 8-pin PCIe power connector to be discussed shortly.
FIGURE 2.29 ATX12V P4 power connector
PCIe devices require more power than PCI ones did. So, for ATX motherboards with PCIe slots, the 20-pin system connector proved inadequate. This led to the ATX12V 2.0 standard and the even higher-end EPS12V standard for servers. These specifications call for a 24-pin connector that adds further positive voltage leads directly to the system connector. The 24-pin connector looks like a larger version of the 20-pin connector. The corresponding pins of the 24-pin motherboard header are actually keyed to accept the 20-pin connector. Adapters are available if you find yourself with the wrong combination of motherboard and power supply. Some power supplies feature a 20-pin connector that snaps together with a separate 4-pin portion for flexibility, called a 20+4 connector, which can be seen in Figure 2.30. Otherwise, it will just have a 24-pin connector. The 6-pin auxiliary connector disappeared with the ATX12V 2.0 specification and was never part of the EPS12V standard.
FIGURE 2.30 A 24-pin ATX12V 2.x connector, in two parts
ATX12V 2.1 introduced a different 6-pin connector, which was shaped a lot like the P4 connector (see Figure 2.31). This 6-pin connector was specifically designed to give additional dedicated power to the PCIe adapters that required it. It provided a 75W power source to such devices.
FIGURE 2.31 A 6-pin ATX12V 2.1 PCIe connector
ATX12V 2.2 replaced the 75W 6-pin connector with a 150W 8-pin connector, as shown in Figure 2.32. The plastic bridge between the top two pins on the left side in the photo keeps installers from inserting the connector into the EPS12V processor power header but clears the notched connector of a PCIe adapter. The individual pin keying should avoid this issue, but a heavy-handed installer could defeat that. The bridge also keeps the connector from being inserted into a 6-pin PCIe header, which has identically keyed corresponding pins.
FIGURE 2.32 An 8-pin ATX12V 2.2 PCIe connector
Although the internal peripheral devices have standard power connectors, manufacturers of computer systems sometimes take liberties with the power interface between the motherboard and power supply of their systems. It's uncommon but not unheard of. In some cases, the same voltages required by a standard ATX power connector are supplied using one or more proprietary connectors. This makes it virtually impossible to replace power supplies and motherboards with other units “off the shelf.” Manufacturers might do this to solve a design issue or simply to ensure repeat business.
SATA drives arrived on the market with their own power requirements in addition to their new data interfaces. (Refer back to Figure 2.9 to see the SATA data and power connectors.) You get the 15-pin SATA power connector, a variant of which is shown in Figure 2.33. The fully pinned connector is made up of three +3.3V, three +5V, and three +12V leads interleaved with two sets of three ground leads. Each of the five sets of three common pins is supplied by one of five single conductors coming from the power supply. When the optional 3.3V lead is supplied, it is standard to see it delivered on an orange conductor.
FIGURE 2.33 SATA power connector
Note that in Figure 2.33, the first three pins are missing. These correspond to the 3.3V pins, which are not supplied by this connector. This configuration works fine; it reflects SATA drives' ability to run from Molex connectors, or from adapters attached to Molex connectors, and thus to work without the optional 3.3V lead. The legacy Molex peripheral connector, shown in Figure 2.34, is a keyed 4-pin plug that supplies +5V and +12V along with two ground leads.
FIGURE 2.34 Molex power connector
On older PSUs, all power connectors were hardwired into the power supply itself. This had a number of interesting side effects. One was that no matter how many or how few internal devices were present, there were a fixed number of connectors. Power supply manufacturers generally provided enough so that most users wouldn't run short of connectors. The flip side was that there were often four to six unused connectors, with their cables still taking up space inside the case. Zip ties and thick rubber bands helped contain the chaos.
As the variety of internal components became more complex, the need arose to have more flexibility in terms of the connectors provided. Out of this need rose an elegant solution—the modular power supply. From a functional standpoint, it works just as a non-modular power supply does. The difference is that none of the power cables are permanently attached. Only the ones that are needed are connected. Figure 2.35 shows the side of a fully modular power supply. The top row has connectors for the motherboard (left and center) and CPU or PCIe device. On the bottom row, you can see four 6-pin peripheral connectors and three 8-pin ones to power the CPU or PCIe devices.
FIGURE 2.35 Modular power supply
You will also see semi-modular PSUs on the market. Generally, the motherboard and CPU connectors will be hardwired, whereas the peripheral connectors can be added as needed. There are two potential disadvantages to using a fully modular or semi-modular power supply. First, some PSU manufacturers use proprietary connectors. Always be sure to keep the extra power cables around (many PSUs come with a bag to store unused cables) just in case they are needed. Second, modular PSUs can take up a little more room in the case. Plugging the power connectors into the PSU can take up an extra ¼ or ½ inch. Usually this isn't an issue, but it can be in smaller cases.
Nearly every computer you will work with has one and only one power supply—is that enough? If the PSU supplies the right amount of wattage to safely power all components, then the answer is nearly always yes. There are some instances, though, where power redundancy is helpful or even critical. Within the realm of power redundancy, there are two paths you can take: redundant power supplies within a system or battery backups. Let's look at both.
It's almost unheard of to see two power supplies installed in a desktop computer. There's generally no need for such a setup, and it would just be a waste of money. For laptops and mobile devices, it's simply not an option. For servers, though, having a redundant power supply (RPS), meaning a second PSU installed in the system, might make sense. The sole reason to have two power supplies is redundancy: if one fails, the other can take over. The transition between the two is designed to be seamless, and service will not be disrupted.
Based on its name and our description so far, it might seem as though this means installing two full-sized PSUs into a computer case. Given the limited amount of space inside a case, you can imagine how problematic this could be. Fortunately, though, PSU manufacturers make devices that have two identical PSUs in one enclosure. One such example is shown in Figure 2.36. The total device is designed to fit into ATX cases and is compliant with ATX12V and EPS12V standards. If one unit fails, the other automatically takes over. They are hot-swappable, so the failed unit can be replaced without powering the system down.
FIGURE 2.36 Hot-swappable redundant PSUs
Photo: Rainer Knäpper, Free Art License, http://artlibre.org/licence/lal/en/, https://commons.wikimedia.org/wiki/File:PC-Netzteil_(redundant).jpg
Although an RPS can help in the event of a PSU failure, it can't keep the system up and running if there is a power outage.
The second type of power redundancy is a battery backup system that the computer plugs into. This is commonly referred to as an uninterruptible power supply (UPS).
These devices can be as small as a brick, like the one shown in Figure 2.37, or as large as an entire server rack. Some just have a few indicator lights, whereas others have LCD displays that show status and menus and come with their own management software. The back of the UPS will have several power plugs. It might divide the plugs such that a few of them provide surge protection only, whereas others provide surge protection as well as backup power, as shown in Figure 2.38.
FIGURE 2.37 An uninterruptible power supply
FIGURE 2.38 The back of an uninterruptible power supply
Inside the UPS are one or more batteries and fuses. Much like a surge suppressor, a UPS is designed to protect everything that's plugged into it from power surges. UPSs are also designed to protect against power sags and even power outages. Energy is stored in the batteries, and if the power fails, the batteries can power the computer for a period of time so that the administrator can safely power it down. Many UPSs and operating systems will also work together to automatically and safely power down a system that has switched to UPS power. These types of devices may be overkill for Uncle Bob's machine at home, but they're critically important fixtures in server rooms.
The UPS should be checked periodically to make sure that its battery is operational. Most UPSs have a test button that you can press to simulate a power outage. You will find that batteries wear out over time, and you should replace the battery in the UPS every couple of years to keep the UPS dependable.
Sometimes power supplies fail. Sometimes you outgrow your power supply and require more wattage than it can provide. Often it is just as cost-effective to buy a whole new case with a power supply included as it is to buy a power supply alone. However, when you consider that you must move everything from the old case to the new one, replacing just the power supply becomes an attractive proposition. Doing so is not a difficult task.
Regardless of which path you choose, you must make sure the power connection of the power supply matches that of the motherboard to be used. Additionally, the physical size of the power supply should factor into your purchasing decision. If you buy a standard ATX-compatible power supply, it might not fit in the petite case you matched up to your micro ATX motherboard. In that scenario, you should be on the lookout for a smaller form factor power supply to fit the smaller case. Odds are that the offerings you find will tend to be a little lighter in the wattage department as well.
Exercise 2.3 details the process to remove an existing power supply. Use the reverse of this process to install the new power supply. Just keep in mind that you might need to procure the appropriate adapter if a power supply that matches your motherboard can no longer be found. There is no post-installation configuration for the power supply, so there is nothing to cover along those lines. Many power supply manufacturers have utilities on their websites that allow you to perform a presale configuration so that you are assured of obtaining the most appropriate power supply for your power requirements.
Just as the power supply in a desktop computer converts AC voltages to DC for the internal components to run on, the AC adapter of a laptop computer converts AC voltages to DC for the laptop's internal components. AC adapters are rated in watts and selected for use with a specific voltage, just as power supplies are. One difference is that AC adapters are also rated in terms of the DC voltage they supply to the laptop or other device, such as certain brands and models of printer.
Because both power supplies and AC adapters go bad on occasion, you should replace a failed unit rather than attempt to repair it yourself. When replacing an AC adapter, be sure to match the size, shape, and polarity of the tip with the adapter you are replacing. Because the output DC voltage is specified for an AC adapter, also be sure to replace it with one of equal output voltage, an issue not seen when replacing AT or ATX power supplies, which have standard outputs. Additionally, as with power supplies, you can replace an AC adapter with a model that supplies more watts than the original, because the component draws only what it needs.
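Those replacement rules are easy to capture as a short checklist. The sketch below is our own illustration (the adapter values are made up); note that the physical size and shape of the tip still need to be checked by hand.

```python
# A simple compatibility check for a replacement AC adapter, following the rules above:
# output voltage and tip polarity must match exactly; wattage may be equal or higher.
from dataclasses import dataclass


@dataclass
class Adapter:
    volts: float
    watts: float
    polarity: str   # e.g., "center-positive" or "center-negative"


def is_suitable_replacement(original: Adapter, candidate: Adapter) -> bool:
    return (
        candidate.volts == original.volts
        and candidate.polarity == original.polarity
        and candidate.watts >= original.watts
    )


old = Adapter(volts=19.5, watts=65, polarity="center-positive")   # hypothetical laptop adapter
new = Adapter(volts=19.5, watts=90, polarity="center-positive")
print(is_suitable_replacement(old, new))   # True -- extra wattage capacity is fine
```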
You can read more on this subject later in Chapter 9, “Laptop and Mobile Device Hardware.”
In this chapter, you learned about three classes of personal computer components that finish our tour of the inside of the case—expansion cards, storage devices, and power supplies.
Expansion cards add helpful capabilities, such as video, audio, network connections, and additional ports for devices and peripherals. Storage devices provide long-term data capacity. Examples include conventional spinning hard drives and SSDs. RAID arrays can help provide performance increases and additional data protection. Other removable storage devices include flash drives, memory cards, and optical drives.
Finally, we discussed power supply safety as well as the various connectors, and we compared and contrasted power supplies and AC adapters. You also learned how to remove, install, and configure storage devices and how to replace power supplies.
Know how to install and configure expansion cards to provide needed functionality. Understand the functionality that video cards, sound cards, network cards, and capture cards provide. Know where to install them and broadly how to configure them.
Be familiar with the components of a conventional hard drive system and the anatomy of a hard drive. Most of today's hard drive systems consist of an integrated controller and disc assembly that communicates to the rest of the system through an external host adapter. The hard disk drives consist of many components that work together, some in a physical sense and others in a magnetic sense, to store data on the disc surfaces for later retrieval. Be familiar with magnetic hard drive speeds, including 5,400, 7,200, 10,000, and 15,000 rpm. Form factors are 2.5" and 3.5".
Understand the advantages that solid-state drives have over conventional drives. SSDs are much faster than magnetic hard drives, produce less heat, and can be made much smaller physically. They are also less susceptible to shock from drops.
Know the differences between three SSD communications interfaces and two form factors. The SSD communications interfaces are NVMe, SATA, and PCIe. The two form factors to know are M.2 and mSATA.
Understand the details surrounding optical storage. From capacities to speeds, you should know what the varieties of optical storage offer as well as the specifics of the technologies this storage category comprises.
Understand the different flash drive and memory card options available. Know the differences between SD cards, CompactFlash, microSD, miniSD, and xD. Be able to identify which cards can fit into specific types of slots natively or with an adapter.
Understand the characteristics of four types of RAID configurations. You need to know RAID 0, RAID 1, RAID 5, and RAID 10. RAID 0 is disk striping, which can improve speed but does not provide fault tolerance. RAID 1 is disk mirroring, which gives fault tolerance but no performance increase. RAID 5 is striping with parity, which can give some performance boost along with fault tolerance. RAID 10, also known as RAID 1+0, adds mirroring to a striped set. Understand what hot-swappable means.
Know about power supplies and their connectors. Power supplies are commonly made in ATX and other, smaller form factors. Regardless of their type, power supplies must offer connectors for motherboards and internal devices. Know the differences among the connectors and why you might need a 20-pin to 24-pin motherboard adapter. Also understand why AC adapters are related to power supplies.
Understand power supply characteristics that determine performance. Power supplies can take input from roughly 110V to 240V and often have a switch on the back to select which source to expect. Output to internal components will be 3.3V, 5V, and 12V. Capacity is measured in watts. There are also redundant and modular power supplies.
Know how to remove, install, and configure storage devices. Know the difference between the data and power connectors used on storage devices. Know what it means to partition and format a hard drive. Be aware of the physical differences in storage device form factors.
Know how to remove, install, and configure power supplies. Know the difference between the modern motherboard power headers and know when an adapter might be required. Be familiar with how to fasten power supplies to the chassis as well as how to unfasten them.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance-based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answer compares to the authors', refer to Appendix B.
Detail the process for removing a power supply from a computer chassis.
THE FOLLOWING COMPTIA A+ 220-1101 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
Thus far, our discussion of computer components has focused primarily on those inside the case. With knowledge of the key internal components under your belt, it is time to turn our attention to the outside of the computer. Dozens of external devices are available to enhance a computer's functionality. We'll cover a variety of them that add video, audio, input, output, and storage capabilities.
Of course, to connect external peripherals, we need some sort of cable and connector. Not everything is wireless yet! Consequently, we will also discuss the interfaces and cables associated with common peripherals. With that, it's now time to think outside the box.
Peripheral devices add much-needed functionality to computers, beyond the core components. Having a fast processor and terabytes of hard drive space is great, but it doesn't complete the picture. Users need the ability to input data and easily see and use the output that the processor generates. Of course, the types of devices that can input or receive output are quite varied. In the following sections, we are going to break peripheral devices into four categories:
We realize that video and audio are indeed input or output devices, but because they are more specialized, we will cover them separately. After this section, you will have a good understanding of purposes of and uses for several common peripheral devices, as well as how they connect to a PC.
The primary method of getting information out of a computer is to use a computer video display. Display systems convert computer signals into text and pictures and display them on a TV-like screen. As a matter of fact, early personal computers used television screens because it was simpler to use an existing display technology than to develop a new one. The most common video device used is a monitor.
Most display systems work the same way. First, the computer sends a signal to a device called the video adapter—an expansion board installed in an expansion bus slot or the equivalent circuitry integrated into the motherboard—telling it to display a particular graphic or character. The adapter then renders the character for the display—that is, it converts the single instruction into several instructions that tell the display device how to draw the graphic and sends the instructions to the display device based on the connection technology between the two. The primary differences after that are in the type of video adapter you are using (digital or analog) and the type of display (LCD, LED, IPS, and so forth).
PC monitors today are generally based on some form of liquid crystal display (LCD) technology. First used with portable computers and then adapted to desktop monitors, LCDs are based on the concept that when an electrical current is passed through a semi-crystalline liquid, the crystals align themselves with the current. When transistors are combined with these liquid crystals, patterns can be formed. Patterns can then be combined to represent numbers or letters. LCDs are relatively lightweight and don't consume much power.
Liquid crystals produce no light, so LCD monitors need a lighting source to display an image. Traditional LCDs use a fluorescent bulb called a backlight to produce light. Most LCDs today use a panel of light-emitting diodes (LEDs) instead, which consume less energy, run cooler, and live longer than fluorescent bulbs. Therefore, when you see a monitor advertised as an LED monitor, it's really an LCD monitor with LED backlighting.
Another type of LED monitor is an organic light-emitting diode (OLED) display. Unlike LED displays, OLEDs are the image-producing parts of the display and the light source. Because of this there is no need for a backlight with its additional power and space requirements, unlike in the case of LCD panels. Additionally, the contrast ratio of OLED displays exceeds that of LCD panels, regardless of backlight source. This means that in darker surroundings, OLED displays produce better images than do LCD panels. In addition, if thin-film electrodes and a flexible compound are used to produce the OLEDs, an OLED display can be made flexible, allowing it to function in novel applications where other display technologies could never work. OLED monitors are usually high-quality displays.
Other acronyms you will see when looking at LCD monitors include twisted nematic (TN), vertical alignment (VA), and in-plane switching (IPS). The short explanation of the differences is that each technology aligns the liquid crystals in a different manner, resulting in performance differences. Generally speaking, TN monitors are the fastest but have the worst color performance and contrast ratios (the difference between black and lit pixels), whereas VA monitors are the slowest but have the most vivid color and contrast. IPS falls somewhere in between the two. The speed aspect often makes TN the choice for gamers.
Although most monitors are automatically detected by the operating system and configured for the best quality that they and the graphics adapter support, sometimes manually changing display settings, such as for a new monitor or when adding a new adapter, becomes necessary. Let's start by defining a few important terms:
Each of these terms relates to settings available through the operating system by way of display-option settings.
Refresh Rate The refresh rate—technically, the vertical scan frequency—specifies how many times in one second the image on the screen can be completely redrawn, if necessary. Measured in screen draws per second, or hertz (Hz), the refresh rate indicates how much effort is being put into checking for updates to the displayed image.
For LCD screens, the refresh rate may or may not be adjustable. The lowest standard refresh rate is 60 Hz, but higher-end monitors will be in the 240 Hz to 360 Hz range.
Higher refresh rates translate to more fluid video motion. Think of the refresh rate as how often a check is made to see if each pixel has been altered by the source. If a pixel should change before the next refresh, the monitor is unable to display the change in that pixel. Therefore, for gaming and home-theater systems, higher refresh rates are an advantage.
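To put those refresh-rate numbers in perspective, the brief Python sketch below (illustrative only, not exam material) converts a refresh rate into the time available to draw each frame.

```python
# Illustrative only: time available for each screen redraw at common refresh rates.
for hz in (60, 120, 144, 240, 360):
    frame_ms = 1000 / hz          # milliseconds per refresh
    print(f"{hz:>3} Hz -> {frame_ms:.2f} ms per frame")
```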
The refresh rate you select must be supported by both your graphics adapter and your monitor, because the adapter drives the monitor. If a monitor supports only one refresh rate, it does not matter how many different rates your adapter supports—without overriding the defaults, you will be able to choose only the one common refresh rate. It is important to note that as the resolution you select increases, the higher supported refresh rates begin to disappear from the selection menu. If you want a higher refresh rate, you might have to compromise by choosing a lower resolution. Exercise 3.1 shows you where to change the refresh rate in Windows 10.
Resolution Resolution is defined by how many software picture elements (pixels) are used to draw the screen. An advantage of higher resolutions is that more information can be displayed in the same screen area. A disadvantage is that the same objects and text displayed at a higher resolution appear smaller and might be harder to see. Up to a point, the added crispness of higher resolutions displayed on high-quality monitors compensates for the negative aspects.
The resolution is described in terms of the visible image's dimensions, which indicate how many rows and columns of pixels are used to draw the screen. For example, a resolution of 2560 × 1440 means 2560 pixels across (columns) and 1440 pixels down (rows) were used to draw the pixel matrix. The video technology in this example would use 2560 × 1440 = 3,686,400 pixels to draw the screen. Resolution is a software setting that is common among CRTs, LCDs, and projection systems, as well as other display devices.
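Because the total pixel count is simply columns times rows, figures like the one above are easy to check. The following Python one-off is illustrative only; the resolutions listed are common examples.

```python
# Illustrative only: total pixels drawn at a few common resolutions.
resolutions = {"1920 x 1080": (1920, 1080),
               "2560 x 1440": (2560, 1440),
               "3840 x 2160 (4K)": (3840, 2160)}
for name, (cols, rows) in resolutions.items():
    print(f"{name}: {cols * rows:,} pixels")   # 2560 x 1440 -> 3,686,400
```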
Setting the resolution for your monitor is fairly straightforward. If you are using an LCD, for best results you should use the monitor's native resolution, which comes from the placement of the transistors in the hardware display matrix of the monitor. For a native resolution of 1680 × 1050, for example, there are 1,764,000 transistors (LCDs) or cells (OLED) arranged in a grid of 1680 columns and 1050 rows. Trying to display a resolution other than 1680 × 1050 through the operating system tends to result in the monitor interpolating the resolution to fit the differing number of software pixels to the 1,764,000 transistors, often resulting in a distortion of the image on the screen.
Some systems will scale the image to avoid distortion, but others will try to fill the screen with the image, resulting in distortion. On occasion, you might find that increasing the resolution beyond the native resolution results in the need to scroll the desktop in order to view other portions of it. In such instances, you cannot see the entire desktop all at the same time. The monitor has the last word in how the signal it receives from the adapter is displayed. Adjusting your display settings to those that are recommended for your monitor can alleviate this scrolling effect.
To change the resolution in Windows 10, right-click the desktop and choose Display Settings (as in Exercise 3.1). There is a pull-down menu for resolution. Click it and choose the resolution you want, as shown in Figure 3.5.
FIGURE 3.5 Adjusting the resolution in Windows 10
Multiple Displays Whether regularly or just on occasion, you may find yourself in a position where you need to use two monitors on the same computer simultaneously. For example, you may need to work in multiple spreadsheets at the same time and having two monitors makes it much easier. Or, if you are giving a presentation and would like to have a presenter's view on your laptop's LCD but need to project a slide show onto a screen, you might need to connect an external projector to the laptop. Simply connecting an external display device does not guarantee that it will be recognized and work automatically. You might need to change the settings to recognize the external device or adjust options such as the resolution or the device's virtual orientation with respect to the built-in display. Exercise 3.2 guides you through this process.
When you have dual displays, you have the option to extend your desktop onto a second monitor or to clone your desktop on the second monitor. To change the settings for multiple monitors in Windows 10, follow the steps in Exercise 3.2, after ensuring that you have a second monitor attached.
If you go to your favorite online retailer and search for monitors, the number of choices can be overwhelming. Here are a few tips to help narrow the field to a manageable number of options.
Another major category of display device is the video projection system, or projector. A portable projector can be thought of as a condensed video display with a lighting system that projects the image onto a screen or other flat surface for group viewing. Interactive whiteboards have become popular over the past decade to allow presenters to project an image onto the board as they use virtual markers to draw electronically on the displayed image. Remote participants can see the slide on their system as well as the markups made by the presenter. The presenter can see the same markups because the board transmits them to the computer to which the projector is attached, causing them to be displayed by the projector in real time.
To accommodate using portable units at variable distances from the projection surface, a focusing mechanism is included on the lens. Other adjustments, such as keystone, trapezoid, and pincushion, are provided through a menu system on many models as well as a way to rotate the image 180 degrees for ceiling-mount applications.
The key characteristics of projectors are resolution and brightness. Resolutions are similar to those of computer monitors. Brightness is measured in lumens. A lumen (lm) is a unit of measure for the total amount of visible light that the projector gives off, based solely on what the human eye can perceive and not on invisible wavelengths. Sometimes the brightness is even more of a selling point than the maximum resolution that the system supports because of the chosen environment in which it operates. For example, it takes a lot more to display a visible image in a well-lit office than it does in a darkened theater.
If you are able to completely control the lighting in the room where the projection system is used, producing little to no ambient light, a projector producing as little as 1,300 lumens is adequate in a home theater environment, while you would need one producing around 2,500 lumens in the office. However, if you can only get rid of most of the ambient light, such as by closing blinds and dimming overhead lights, the system should be able to produce 1,500 to 3,500 lumens in the home theater and 3,000 to 4,500 lumens in the office. If you have no control over a very well-lit area, you'll need 4,000 to 4,500 lumens in the home theater and 5,000 to 6,000 lumens in the business setting.
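The brightness guidance in the previous paragraph is essentially a small decision table. As an illustration only, the Python sketch below encodes those same ranges; the dictionary keys and function name are our own labels, not industry terms.

```python
# Illustrative only: rough projector-brightness guidance from the text,
# keyed by how much ambient light you can control and the venue type.
RECOMMENDED_LUMENS = {
    ("dark",     "home theater"): (1300, 1300),
    ("dark",     "office"):       (2500, 2500),
    ("dimmed",   "home theater"): (1500, 3500),
    ("dimmed",   "office"):       (3000, 4500),
    ("well-lit", "home theater"): (4000, 4500),
    ("well-lit", "office"):       (5000, 6000),
}

def suggest_lumens(lighting, venue):
    low, high = RECOMMENDED_LUMENS[(lighting, venue)]
    return f"{low}-{high} lumens" if low != high else f"about {low} lumens"

print(suggest_lumens("dimmed", "office"))   # 3000-4500 lumens
```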
By way of comparison, a 60W standard light bulb produces about 800 lumens. Output is not linear, however, because a 100W light bulb produces over double, at 1,700 lm. Nevertheless, you couldn't get away with using a standard 100W incandescent bulb in a projector. The color production is not pure enough and constantly changes throughout its operation due to deposits of soot from the burning of its tungsten filament during the production of light. High-intensity discharge (HID) lamps, like the ones found in projection systems, do more with less by using a smaller electrical discharge to produce far more visible light. Expect to pay considerably more for projector bulbs than for standard bulbs of a comparable wattage.
Years ago, owing to the continued growth of the Internet, video camera–only devices known as webcams started their climb in popularity. Today, with the prevalence of working from home and services like Zoom and Google Meet, it seems that everyone has been introduced to webcams.
Webcams make great security devices as well. Users can keep an eye on loved ones or property from anywhere that Internet access is available. Care must be taken, however, because the security that the webcam is intended to provide can backfire on the user if the webcam is not set up properly. Anyone who happens upon the web interface for the device can control its actions if there is no authentication enabled. Some webcams provide a light that illuminates when someone activates the camera. Nevertheless, it is possible to decouple the camera's operation from that of its light.
Nearly every laptop produced today has a webcam built into its bezel. An example is shown in Figure 3.8—this one has a light and two microphones built in next to it. If a system doesn't have a built-in camera, a webcam connects directly to the computer through an I/O interface, typically USB. Webcams that have built-in wired and wireless NIC interfaces for direct network attachment are prevalent as well. A webcam does not have any self-contained recording mechanism. Its sole purpose is to transfer its captured video directly to the host computer, usually for further transfer over the Internet—hence, the term web.
FIGURE 3.8 An integrated webcam
Audio devices, true to their name, produce sound by plugging into a sound card. Many sound cards today are integrated into a device's motherboard, but some computers still have separate audio expansion cards. Audio devices can provide output, such as through speakers or headphones, or input with a microphone.
Speakers and headphones generally connect with a 1/8" (3.5 mm) audio connector, as shown in Figure 3.9. Most audio connectors have two thin black bands engraved on them, which separate the connector into three parts: the tip, ring, and sleeve. Because of this, sometimes you will see these connectors referred to as TRS connectors. The tip provides left audio, the ring (the segment between the two bands) provides right audio, and the sleeve is the ground. You'll notice that the connector in Figure 3.9 has three black bands, providing four connections and making it a TRRS connector. The fourth connection is for the microphone.
FIGURE 3.9 1/8" audio connector
Headsets that provide audio and a microphone are popular for audio conferencing calls and video gaming. A sample headset is shown in Figure 3.10. This model connects via USB, as do most headsets. Volume controls and a microphone mute are located on the right earpiece.
FIGURE 3.10 A USB headset
Although microphones have been mentioned throughout this chapter, they have yet to be formally defined, and the definition is at once technical and simple: microphones convert sound waves into varying electrical signals. The result can be recorded, transmitted, or altered in a variety of ways, including amplification.
When installing a microphone, you must match its connector with an available one on the computer. Modern choices include the classic analog pink TRS connector and USB. Wireless versions also exist, but their receiver might still be connected to a standard I/O port. Alternatively, the microphone could be paired with a built-in Bluetooth transceiver, headphones, or headset.
Configuring a microphone on a PC is most often performed through the Recording tab of the Sound applet in Control Panel. Options include setting the levels and choosing enhancements, such as noise suppression and echo cancellation. Specialized applications may also have internal configuration for the microphone, passing most details of the configuration back to the operating system.
An input device is one that transfers information from outside the computer system to an internal storage location, such as system RAM, video RAM, flash memory, or disk storage. Without input devices, computers would be unable to change from their default boot-up state. An output device does the opposite of an input device—it takes information that's stored in RAM or another location and spits it back out somewhere for the user to do something with it. We've already covered monitors, which are the most common output device. The other major type of output device is a printer. Chapter 4, “Printers and Multifunction Devices,” is dedicated to them. Further, some devices are capable of managing both input and output.
The keyboard is easily the most popular input device, so much so that it's more of a necessity. Very few users would even think of beginning a computing session without a working keyboard. Fewer still would even know how. The U.S. English keyboard places keys in the same orientation as the QWERTY typewriter keyboards, which were developed in the 1860s. Wired keyboards are almost always attached via USB. Wireless keyboards will often have a USB dongle that is attached to the computer, but they can also use Bluetooth.
Keyboards have also added separate number pads to the side and function keys (not to be confused with the common laptop key labeled Fn), placed in a row across the top of the keyboard above the numerical row. Key functionality can be modified by using one or more combinations of the Ctrl, Alt, Shift, and laptop Fn keys along with the normal QWERTY keys.
Technically speaking, the keys on a keyboard complete individual circuits when each one is pressed. The completion of each circuit leads to a unique scan code that is sent to the keyboard connector on the computer system. The computer uses a keyboard controller chip or function to interpret the code as the corresponding key sequence. The computer then decides what action to take based on the key sequence and what it means to the computer and the active application, including simply displaying the character printed on the key.
In addition to the layout for a standard keyboard, other keyboard layouts exist—some not nearly as popular. For example, without changing the order of the keys, an ergonomic keyboard is designed to feel more comfortable to users as they type. The typical human's hands do not rest with the fingers straight down. Ergonomic keyboards, therefore, should not place keys flat and along the same plane. To accomplish that goal, manufacturers split the keyboard down the middle, angling keys on each side downward from the center. Doing so fits the keys to the fingers of the hands when they are in a relaxed state. Figure 3.11 shows an example of an ergonomic keyboard. Even more exotic-looking ergonomic keyboards exist and may provide better relief for users who suffer from repetitive-use problems such as carpal tunnel.
FIGURE 3.11 An ergonomic keyboard
Although the computer mouse was invented by Douglas Engelbart at the Stanford Research Institute in the 1960s and refined at Xerox's Palo Alto Research Center (PARC) in the 1970s, it was in 1984 that Apple made the mouse an integral part of the personal computer with the introduction of the Macintosh. In its most basic form, the mouse is a hand-fitting device that uses some form of motion-detection mechanism to translate its own physical two-dimensional movement into onscreen cursor motion. Many variations of the mouse exist, including trackballs, tablets, touch pads, and pointing sticks. Figure 3.12 illustrates the most recognizable form of the mouse.
FIGURE 3.12 A computer mouse
The motion-detection mechanism of the original Apple mouse was a simple ball that protruded from the bottom of the device so that when the bottom was placed against a flat surface that offered a slight amount of friction, the mouse would glide over the surface but the ball would roll, actuating two rollers that mapped the linear movement to a Cartesian plane and transmitted the results to the software interface. This method of motion detection has been replaced by optical receptors to catch LED light reflected from the surface the mouse is used on. Note that most optical mice will have problems working on a transparent glass surface because of the lack of reflectivity.
The mouse today can be wired to the computer system or connected wirelessly. A wired mouse typically uses a USB port, which also provides power. Wireless versions will have a USB dongle or connect via Bluetooth. They are powered with batteries, and the optical varieties deplete these batteries more quickly than their mechanical counterparts.
The final topic is one that is relevant for any mouse: buttons. The number of buttons that you need your mouse to have depends on the software interfaces you use. For the Macintosh, one button has always been sufficient, but for a Windows-based computer, at least two are recommended—hence, the term right-click. Today, the mouse is commonly found to have a wheel on top to aid in scrolling and other specialty movement. The wheel has even developed a click in many models, sort of an additional button underneath the wheel. Buttons on the side of the mouse that can be programmed for whatever the user desires are common today as well.
There are several variants on pointer devices, such as trackballs. A trackball is like an inverted mouse. Both devices place the buttons on the top, which is where your fingers will be. A mouse places its tracking mechanism on the bottom, requiring that you move the entire assembly as an analogue for how you want the cursor on the screen to move. In contrast, a trackball places the tracking mechanism, usually a ball that is about one inch in diameter, on the top with the buttons. You then have a device that need not be moved around on the desktop and can work in tight spaces and on surfaces that would be incompatible with the use of a mouse. The better trackballs place the ball and buttons in such a configuration that your hand rests ergonomically on the device, allowing effortless control of the onscreen cursor.
We spent quite a lot of time in Chapter 2, “Expansion Cards, Storage Devices, and Power Supplies,” discussing storage options, such as hard drives and optical drives. These devices are frequently internal to the case, but external options are available as well.
Take optical drives, for instance. In order to save space on laptops, manufacturers usually don't include internal optical drives. If users want to play a Blu-ray or DVD movie, they will need to attach an external optical drive. External optical drives can be used for data backups as well. These external drives will most likely connect via USB or eSATA.
External storage drives can greatly enhance the storage capacity of a computer, or they can provide networked storage for several users. A plethora of options is available, from single drives to multi-drive systems with several terabytes of capacity. Figure 3.13 shows an external network-attached storage (NAS) device.
FIGURE 3.13 A network-attached storage device
“NETGEAR ReadyNAS NV+” by PJ - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons
Looking at Figure 3.13, you can see that this is a self-enclosed unit that can hold up to four hard drives. Some hold more; some hold fewer. Nicer NAS systems enable you to hot-swap hard drives and have built-in fault tolerance as well.
In addition to the hardware, the NAS device contains its own operating system, meaning that it acts like its own file server. In most cases, you can plug it in, do some very minor configuration, and have instant storage space on your network. As far as connectivity goes, NAS systems will connect to a PC through a USB or eSATA port, but that is primarily so you can use that PC to run the configuration software for the NAS. The NAS also connects to the network, and that is how all network users access the storage space.
Peripheral devices used with a computer need to attach to the motherboard somehow. They do so through the use of ports and cables. A port is a generic name for any connector on a computer or peripheral into which a cable can be plugged. A cable is simply a way of connecting a peripheral or other device to a computer using multiple copper or fiber-optic conductors inside a common wrapping or sheath. Typically, cables connect two ports: one on the computer and one on some other device.
The A+ exam objectives break cables and connectors into two different subobjectives, but really they need to be discussed together. After all, a cable without a connector doesn't do much good, and neither does a connector without a cable. In the following sections, we'll look at three different classifications of cables and the connectors that go with them: peripheral, video, and hard drive.
Some cables are for specific types of devices only. For example, HDMI can transmit audio as well as video, and SCSI supports more than just hard drives. For the most part, though, we associate HDMI with video and SCSI with storage devices.
Unlike HDMI and SCSI, the cables and connectors in this section are specifically designed to connect a variety of devices. For example, someone may have a USB hub with a wireless mouse, network card, Lightning cable (to charge an iPhone), and flash drive all attached at the same time. Those four devices serve very different purposes, but they all share the USB connection in common. We'll start with the highly popular USB and then discuss Lightning ports, Thunderbolt cables, and serial cables.
Universal Serial Bus (USB) cables are used to connect a wide variety of peripherals, such as keyboards, mice, digital cameras, printers, scanners, hard drives, and network cards, to computers. USB was designed by several companies, including Intel, Microsoft, and IBM, and is currently maintained by the USB Implementers Forum (USB-IF).
USB technology is fairly straightforward. Essentially, it is designed to be Plug and Play—just plug in the peripheral and it should work, provided that the software is installed to support it. Many standard devices have drivers that are built into the common operating systems or automatically downloaded during installation. More complex devices come with drivers to be installed before the component is connected.
USB host controllers can support up to 127 devices, which is accomplished through the use of a 7-bit identifier. A 7-bit field allows for 128 addresses, but address 0 is reserved for devices that have not yet been assigned an address, leaving 127 usable. Realistically speaking, you'll probably never get close to this maximum. Even if you wanted to try, you won't find any computers with 127 ports. Instead, you would plug a device known as a USB hub (shown in Figure 3.14) into one of your computer's USB ports, which will give you several more USB ports from one original port. Understand that a hub counts as a device for addressing purposes. Hubs can be connected to each other, but interconnection of host controllers is not allowed; each one and its connected devices are isolated from other host controllers and their devices. As a result, USB ports are not considered networkable ports. Consult your system's documentation to find out if your USB ports operate on the same host controller.
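As a quick check on the addressing math, a 7-bit identifier yields 128 values, one of which is reserved, and every hub you attach consumes one of the remaining addresses. The short Python sketch below is illustrative only.

```python
# Illustrative only: USB address budget with a 7-bit identifier.
total_addresses = 2 ** 7               # 128 possible addresses
usable_devices = total_addresses - 1   # one address is reserved
hubs = 3                               # each hub counts as a device
print(usable_devices)                  # 127
print(usable_devices - hubs)           # addresses left for peripherals: 124
```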
FIGURE 3.14 A 4-port USB hub
Another nice feature of USB is that devices can draw their power from the USB cable, so you may not need to plug in a separate power cord. This isn't universally true, though, as some peripherals still require external power.
Even though USB was released in 1996, the first widely used standard was USB 1.1, which was released in 1998. It was pretty slow—only 12 Mbps at full speed and 1.5 Mbps at low speed—so it was used mainly for keyboards, mice, and printers. When USB 2.0 came out in 2000 with a faster transfer rate of 480 Mbps (called Hi-Speed), video devices became possible. The newer USB 3.x and USB4 standards have increased throughput even further. Table 3.1 lays out the specifications and speeds for you.
Specification | Release year | Maximum speed | Trade name | Color |
---|---|---|---|---|
USB 1.1 | 1998 | 12 Mbps | Full-Speed | White |
USB 2.0 | 2000 | 480 Mbps | Hi-Speed | Black |
USB 3.0 | 2008 | 5 Gbps | SuperSpeed | Blue |
USB 3.1 | 2013 | 10 Gbps | SuperSpeed+ | Teal |
USB 3.2 | 2017 | 20 Gbps | SuperSpeed+ | Red |
USB4 | 2019 | 40 Gbps | USB4 40 Gbps | n/a |
TABLE 3.1 USB specifications
The USB 1.x and 2.x specifications didn't recommend a specific color for the ports, but when USB 3.0 was released, the USB Implementers Forum suggested that the ports and cable connectors be colored blue, to signify that they were capable of handling higher speeds. Device manufacturers are not required to follow the color-coding scheme, so you may see some inconsistency. A yellow USB port is “always on,” meaning it's capable of charging a connected device even if the PC is sleeping or shut down.
USB4 is the newest standard, and it's based on Thunderbolt 3 specifications. Other features of USB4 include:
As mentioned previously, USB ports provide power to devices plugged into them. Typical power for attached USB devices is 5V. The maximum current (amps) and wattage will depend on the connected device and USB standard being used.
All USB ports are also capable of functioning as charging ports for devices such as tablets, smartphones, and smart watches. The charging standard, called USB Battery Charging, was released in 2007. USB Power Delivery (PD) was developed in 2012. Technically, they are different standards, but in practice, USB ports are capable of supporting both standards at the same time. Table 3.2 outlines some of the versions and the maximum power that they provide. The newest version, USB PD 3.1, requires the use of a USB-C cable.
Standard | Year | Maximum power |
---|---|---|
USB Battery Charging 1.0 | 2007 | 5V, 1.5A (7.5W) |
USB Battery Charging 1.2 | 2010 | 5V, 5A (20W) |
USB Power Delivery 1.0 | 2012 | 20V, 5A (100W) |
USB Power Delivery 2.0 (specified use of Type-C connectors but only up to 15W) | 2014 | 5V, 3A (15W) |
USB Power Delivery 3.0 | 2015 | 20V, 5A (100W) |
USB Power Delivery 3.1 | 2021 | 48V, 5A (240W) |
TABLE 3.2 USB power standards
A smartphone or tablet typically needs a minimum of about 7.5 watts to charge properly. The Battery Charging 1.0 standard was good enough, but not for larger devices. For example, about 20 watts is required to power a small laptop computer, and standard 15-inch laptops can require 60 watts or more. With USB PD, one USB port can now provide enough power for a laptop as well as a small printer.
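Because watts are simply volts multiplied by amps, you can verify the figures in Table 3.2 and compare them against the rough device requirements mentioned above. The Python sketch below is illustrative only; the wattage needs are the approximate values from the text, not formal specifications.

```python
# Illustrative only: watts = volts x amps, checked against rough device needs.
pd_profiles = {
    "USB Battery Charging 1.2": (5, 5),     # 5 V, 5 A
    "USB Power Delivery 3.0":   (20, 5),    # 20 V, 5 A
    "USB Power Delivery 3.1":   (48, 5),    # 48 V, 5 A
}
needs_watts = {"smartphone": 7.5, "small laptop": 20, "15-inch laptop": 60}

for profile, (volts, amps) in pd_profiles.items():
    watts = volts * amps
    supported = [device for device, need in needs_watts.items() if watts >= need]
    print(f"{profile}: {watts} W -> can power {', '.join(supported)}")
```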
Because of the capabilities of USB PD, it's becoming common to see devices up to laptop size lose their standard AC power ports and adapters—they may just have a USB-C port instead. To get the full capabilities of USB PD, you need to use a USB-C port and cable.
In order to achieve the full speed of the specification that a device supports, the USB cable needs to meet that specification as well. In other words, USB 1.x cables cannot provide USB 2.0 and 3.x performance, and USB 2.0 cables cannot provide USB 3.x performance. Otherwise, the connected device will have to fall back to the maximum version supported by the cable. This is usually not an issue, except for the lost performance, but some high-performance devices will refuse to operate at reduced levels. Note that all specifications are capable of Low Speed, which is a 1.5 Mbps performance standard that has existed since the beginning of USB time.
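In practice, the usable speed is bounded by the slowest element in the chain, whether that is the port, the cable, or the device. The Python sketch below models that idea as a simple minimum; it is illustrative only and is not how USB actually negotiates speeds on the wire. The speed table mirrors Table 3.1.

```python
# Illustrative only: the effective link speed is bounded by the slowest element.
SPEED_MBPS = {"USB 1.1": 12, "USB 2.0": 480, "USB 3.0": 5_000,
              "USB 3.1": 10_000, "USB 3.2": 20_000, "USB4": 40_000}

def effective_speed(port_spec, cable_spec, device_spec):
    return min(SPEED_MBPS[port_spec], SPEED_MBPS[cable_spec], SPEED_MBPS[device_spec])

# A USB 3.1 device on a USB 2.0 cable falls back to 480 Mbps.
print(effective_speed("USB 3.1", "USB 2.0", "USB 3.1"))  # 480
```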
Throughout most of its history, USB has relied on a small suite of standard connectors. The two broad classifications of connectors are designated Type-A and Type-B connectors, and there are micro and mini versions of each. A standard USB cable has some form of Type-A connector on the end that plugs into the computer or hub, and some form of Type-B or proprietary connector on the device end. Figure 3.15 shows five classic USB 1.x/2.0 cable connectors. From left to right, they are as follows:
FIGURE 3.15 Standard USB connectors
By Techtonic (edited from USB types.jpg) [Public domain], via Wikimedia Commons
Small form factor devices, including many smartphones and smaller digital cameras, use a micro-USB or mini-USB connector, unless the manufacturer has developed its own proprietary connector. Micro-USB connectors (and modified ones) are popular with many Android phone manufacturers.
In 2014, a new connector named USB Type-C (or simply USB-C) was developed. USB-C is designed to replace Type-A and Type-B, and, unlike its predecessors, it's reversible. That means no more flipping the connector over several times to figure out which way it connects. Type-C cables will also be able to provide more power to devices than classic cables were. Figure 3.16 shows a Type-C connector and a Type-A connector. You can see that while the Type-A connector is rectangular-shaped, the Type-C connector has rounded corners and looks more like an elongated oval.
FIGURE 3.16 USB Type-C (top) and Type-A (bottom)
USB was designed to be a short-distance technology. Because of this, USB cables are limited in length. USB 1.x and 2.0 can use cables up to 5 meters long, whereas USB 3.x can use cables up to 3 meters long. The maximum length of a USB4 cable is shorter still, at 0.8 meters (80 centimeters). In addition, if you use hubs, you should never use more than five hubs between the system and any component.
Despite the seemingly locked-up logic of USB connectivity, it is occasionally necessary to alter the interface type at one end of a USB cable. For that reason, there are a variety of simple, passive converters on the market with a USB interface on one side and a USB or different interface on the other. Along with adapters that convert USB Type-A to USB Type-B, there are adapters that will convert a male connector to a female one. In addition, you can convert USB to a lot of other connector types, such as USB to Ethernet (shown in Figure 3.17), USB to SATA, USB to eSATA, USB to PS/2, USB to serial, and a variety of others.
FIGURE 3.17 Kensington USB to Ethernet adapter
Introduced in 2012 with the iPhone 5, the Lightning connector is Apple's proprietary connector for iPhones and iPads. It's an 8-pin connector that replaced Apple's previous 30-pin dock connector. A standard Lightning cable has a USB Type-A connector on one end and the Lightning connector on the other, as shown in Figure 3.18. It's not keyed, meaning that you can put it in with either edge up.
FIGURE 3.18 Lightning cable
Lightning cables support USB 2.0. You will find cables that are USB-C to Lightning, as well as various Lightning adapters, such as those to HDMI, DisplayPort, audio, and Lightning to female USB Type-A (so you can plug a USB device into an iPad or iPhone).
There are rumors that Apple may do away with the Lightning connector in a future iPhone release and instead use USB-C. After all, Apple has added USB-C ports to laptops and iPads, and USB-C is the port of the future. The same rumors have persisted since the iPhone 8 was released in 2017, and it seems that Apple has little reason to move away from its proprietary connector.
Where there's lightning, there's thunder, right? Bad joke attempts aside, in computer circles Lightning connectors don't have anything to do with Thunder(bolt). Thunderbolt, created in collaboration between Intel and Apple and released in 2011, combines PCI Express 2.0 x4 with the DisplayPort 1.x technology. While it's primarily used for video (to replace DisplayPort), the connection itself can support multiple types of peripherals, much like USB does.
For most of their histories, Thunderbolt and USB have been competing standards. Thunderbolt was designed more for video applications and USB was the slower “jack of all trades” port, but in reality, they could be used for almost the exact same list of peripherals. It just depended on what your computer supported. But as we pointed out in the USB section, the new USB4 version is based on Thunderbolt 3, providing the same speed and using the same connectors. Table 3.3 shows the four Thunderbolt versions and some key characteristics.
Version | Year | Maximum throughput | Connector | Other new features |
---|---|---|---|---|
Thunderbolt 1 | 2011 | 10 Gbps | Mini DisplayPort | |
Thunderbolt 2 | 2013 | 20 Gbps | Mini DisplayPort | DisplayPort 1.2 (can send video to a 4K display) |
Thunderbolt 3 | 2015 | 40 Gbps | USB-C | 10 Gbps Ethernet support |
Thunderbolt 4 | 2020 | 40 Gbps | USB-C | Can support two 4K displays or one 8K display; 32 Gbps PCIe |
TABLE 3.3 Thunderbolt standards
Thunderbolt 3 was released in 2015 and doubled the bandwidth to 40 Gbps. It supports PCIe 3.0 and DisplayPort 1.2, meaning that it can support dual 4K displays at 60 Hz or a single 4K display at 120 Hz. It also provides up to 100 watts of power to a device.
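A rough back-of-the-envelope calculation shows why 40 Gbps comfortably carries two uncompressed 4K streams at 60 Hz with 24-bit color. The Python sketch below is an approximation only; it ignores blanking intervals, audio, and encoding overhead.

```python
# Illustrative only: uncompressed 4K @ 60 Hz, 24-bit color (ignores blanking/overhead).
width, height, bits_per_pixel, hz = 3840, 2160, 24, 60
gbps_per_display = width * height * bits_per_pixel * hz / 1e9
print(f"{gbps_per_display:.1f} Gbps per display")           # ~11.9 Gbps
print(f"{2 * gbps_per_display:.1f} Gbps for two displays")  # ~23.9 Gbps, under 40 Gbps
```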
Thunderbolt 4 is the current standard, released in 2020. Perhaps the most interesting thing about the new release is what it doesn't do, which is increase data transfer rates versus Thunderbolt 3. It still has a maximum bandwidth of 40 Gbps. And the maximum of 100 watts of power to attached devices didn't change either. The big advantages Thunderbolt 4 has include support for two 4k displays or one 8k display and the requirement to support 32 Gbps data transfers via PCIe, up from 16 Gbps in version 3.
The most common Thunderbolt cable is a copper, powered active cable extending as far as 3 meters, which was designed to be less expensive than an active version of a DisplayPort cable of the same length. There are also optical cables in the specification that can reach as far as 60 meters. Copper cables can provide power to attached devices, but optical cables can't.
Additionally, and as is the case with USB, Thunderbolt devices can be daisy-chained and connected via hubs. Daisy chains can extend six levels deep for each controller interface, and each interface can optionally drive a separate monitor, which should be placed alone on the controller's interface or at the end of a chain of components attached to the interface.
As noted in Table 3.3, Thunderbolt changed connectors between versions 2 and 3. Figure 3.19 shows two Thunderbolt 2 interfaces next to a USB port on an Apple MacBook Pro. Note the standard lightning-bolt insignia by the port. Despite its diminutive size, the Thunderbolt port has 20 pins around its connector bar, like its larger DisplayPort cousin. Of course, the functions of all the pins do not directly correspond between the two interface types, because Thunderbolt adds PCIe functionality.
FIGURE 3.19 Two Thunderbolt 2 interfaces
Starting with Thunderbolt 3, the connector was changed to standard USB-C connectors, as shown in Figure 3.20. Notice that the lightning bolt icon remained the same.
FIGURE 3.20 Two Thunderbolt 3 interfaces
By Amin - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=67330543
Converters are available that connect Thunderbolt connectors to VGA, HDMI, and DVI monitors. Active converters that contain chips to perform the conversion are necessary in situations where the technology is not directly pin-compatible with Thunderbolt—as with VGA and DVI-A analog monitor inputs, for example. Active converters are only slightly more expensive than their passive counterparts but still only a fraction of the cost of Thunderbolt hubs. One other advantage of active converters is that they can support resolutions of 4K (3840 × 2160) and higher.
Before USB became widespread in the late 1990s, serial ports were considered slow and inferior to parallel ports. Still, serial enjoyed use among peripherals that didn't need to transfer information at high speeds, such as mice, modems, network management devices, and even printers. Figure 3.21 shows a 9-pin serial port. It's the one marked "Serial," and it's also the only male connector on the back of the PC.
FIGURE 3.21 Several peripheral ports
As you might expect, a serial cable attaches to the serial port. Figure 3.22 shows a female DB-9 serial connector. To make things more confusing, sometimes you will hear people refer to the image in Figure 3.22 as an RS-232 cable or connector. Even though the terms are often used interchangeably, there is a technical difference.
FIGURE 3.22 DB-9 serial connector
DB-9 refers to a specific type of D-sub connector that has 9 pins. RS-232, on the other hand, is a communications standard for serial transmission. In other words, systems may communicate with each other using RS-232 over a DB-9 connection. But RS-232 can be used on other types of serial cables as well, such as DB-15 or DB-25. Generally speaking, if someone asks for an RS-232 serial cable, they mean a DB-9 cable with female connectors. But it's always best to confirm.
RS-232 did have a few advantages over USB—namely, longer cable length (15 meters vs. 3–5 meters) and a better resistance to electromagnetic interference (EMI). Still, USB has made old-school serial ports nearly obsolete. About the only time they are used today is for management devices that connect to servers or network routers with no keyboard and monitor installed.
Computer displays are ubiquitous—they're easily the most widely used peripheral. Different standards exist to connect displays to the computer, and you need to be familiar with four of them for the exam: VGA, DVI (and its variants), HDMI, and DisplayPort. We will start with the older technologies and work toward the present.
The Video Graphics Array (VGA) connector was the de facto video standard for computers for years and is still in use today. First introduced in 1987 by IBM, it was quickly adopted by other PC manufacturers. The term VGA is often used interchangeably to refer to generic analog video, the 15-pin video connector, or a 640 × 480 screen resolution (even though the VGA standard can support much higher resolutions). Figure 3.23 shows a VGA port, as well as the male connector that plugs into the port. Nearly all VGA connectors are blue.
FIGURE 3.23 VGA connector and port
VGA technology is the only one on the objectives list that is purely analog. It has been superseded by newer digital standards, such as DVI, HDMI, and DisplayPort, and it was supposed to be phased out starting in 2013. A technology this widely used will be around for quite a while, though, and you'll still see it occasionally in the wild (or still in use).
The analog VGA standard ruled the roost for well over a decade but it had a lot of shortcomings. Digital video can be transmitted farther and at higher quality than analog, so development of digital video standards kicked off in earnest. The first commercially available one was a series of connectors known collectively as Digital Visual Interface (DVI) and was released in 1999.
At first glance, the DVI connector might look like a standard D-sub connector. On closer inspection, however, it begins to look somewhat different. For one thing, it has quite a few pins, and for another, the pins it has are asymmetrical in their placement on the connector. The DVI connector is usually white and about an inch long. Figure 3.24 shows what the connector looks like coming from the monitor.
FIGURE 3.24 DVI connector
There are three main categories of DVI connectors:
The DVI-D and DVI-I connectors come in two varieties: single-link and dual-link. The dual-link options have more conductors—taking into account the six center conductors—than their single-link counterparts; therefore, the dual-link connectors accommodate higher speed and signal quality. The additional link can be used to increase screen resolution for devices that support it. Figure 3.25 illustrates the five types of connectors that the DVI standard specifies.
FIGURE 3.25 Types of DVI connectors
DVI-A and DVI-I analog quality is superior to that of VGA, but it's still analog, meaning that it is more susceptible to noise. However, the DVI analog signal will travel farther than the VGA signal before degrading beyond usability. Nevertheless, the DVI-A and VGA interfaces are pin-compatible, meaning that a simple passive DVI-to-VGA adapter, as shown in Figure 3.26, is all that is necessary to convert between the two. As you can see, the analog portion of the connector, if it exists, comprises the four separate color and sync pins and the horizontal blade that they surround, which happens to be the analog ground lead that acts as a ground and physical support mechanism even for DVI-D connectors.
FIGURE 3.26 DVI-to-VGA adapter
It's important to note that DVI-I cables and interfaces are designed to interconnect two analog or two digital devices; they cannot convert between analog and digital. DVI cables must support a signal of at least 4.5 meters, but better cable assemblies, stronger transmitters, and active boosters result in signals extending over longer distances.
One thing to note about analog versus digital display technologies is that all graphics adapters and all monitors deal with digital information. It is only the connectors and cabling that can be made to support analog transmission. Before DVI and HDMI encoding technologies were developed, consumer digital video display connectors could not afford the space to accommodate the number of pins that would have been required to transmit 16 or more bits of color information per pixel. For this reason, the relatively few conductors of the inferior analog signaling in VGA were appealing.
High-Definition Multimedia Interface (HDMI) is an all-digital technology that advances the work of DVI to include the same dual-link resolutions using a standard HDMI cable but with higher motion-picture frame rates and digital audio right on the same connector. HDMI was introduced in 2002, which makes it seem kind of old in technology years, but it's a great, fast, reliable connector that will probably be around for several years to come. HDMI cabling also supports an optional Consumer Electronics Control (CEC) feature that allows transmission of signals from a remote-control unit to control multiple devices without separate cabling to carry infrared signals.
HDMI cables, known as Standard and High Speed, exist today in the consumer space. Standard cables are rated for 720p resolution as well as 1080i, but not 1080p. High Speed cables are capable of supporting not only 1080p, but also the newer 4k and 8k technologies. Figure 3.27 shows an HDMI cable and port.
FIGURE 3.27 HDMI cable and port
In June 2006, revision 1.3 of the HDMI specification was released to support the bit rates necessary for HD DVD and Blu-ray Disc. This version also introduced support for “deep color,” or color depths of at least one billion colors, including 30-, 36-, and 48-bit color. However, not until version 1.4, which was released in May 2009, was the High Speed HDMI cable initially required.
With version 1.4 came HDMI capability for the controlling system—the television, for instance—to relay Ethernet frames between its connected components and the Internet, alleviating the need for each and every component to find its own access to the LAN for Internet access. Both Standard and High Speed cables are available with this Ethernet channel. Each device connected by such a cable must also support the HDMI Ethernet Channel specification, however.
Additional advances that were first seen in version 1.4 were 3D support, 4K resolution (but only at a 30 Hz refresh rate), an increased 120 Hz refresh rate for the 1080 resolutions, and an Audio Return Channel (ARC) for televisions with built-in tuners to send audio back to an A/V receiver without using a separate output cable. Version 1.4 also introduced the anti-vibration Type-E locking connector for the automotive-video industry and cables that can also withstand vibration as well as the hot/cold extremes that are common in the automotive world.
Version 2.0 of HDMI (2013) introduced no new cable requirements. In other words, the existing High Speed HDMI cable is fully capable of supporting all new version 2 enhancements. These enhancements include increasing the 4K refresh rate to 60 Hz, a 21:9 theatrical widescreen aspect ratio, 32-channel audio (by comparison, 7.1 surround sound comprises only eight channels), support for the more lifelike Rec. 2020 color space, and multiple video and audio streams to the same output device for multiple users. Version 2.0a, released in 2015, primarily added high dynamic range (HDR) video, but it does not require any new cables or connectors.
The most recent version (as of this writing) is HDMI 2.1, released in November 2017. Version 2.1 specifies a new cable type called 48G, which provides for 48 Gbps of bandwidth. 48G cables are backward compatible with older HDMI versions. You can also use older cables with 48G-capable devices, but you just won't get the full 48 Gbps bandwidth. HDMI 2.1 also provides for 120 Hz refresh rates for 4K, 8K, and 10K video, and it supports enhanced Audio Return Channel (eARC), which is needed for object-based audio formats, such as DTS:X and Dolby Atmos.
Even though the HDMI connector is not the same as the one used for DVI, the two technologies are electrically compatible. HDMI is compatible with DVI-D and DVI-I interfaces through proper adapters, but HDMI's audio and remote-control pass-through features are lost. Additionally, 3D video sources work only with HDMI. Figure 3.28 shows a DVI-to-HDMI adapter between DVI-D and the Type-A 19-pin HDMI interface. Compare the DVI-D interface in Figure 3.28 to the DVI-I interface in Figure 3.26, and note that the ground blade on the DVI-D connector is narrower than that of the DVI-A and DVI-I connectors. The DVI-D receptacle does not accept the other two plugs for this reason, as well as because the four analog pins around the blade have no sockets in the DVI-D receptacle.
FIGURE 3.28 DVI-to-HDMI adapter
Unlike DVI-D and, by extension, DVI-I devices, DVI-A and VGA devices cannot be driven passively by HDMI ports directly. An HDMI-to-VGA adapter must be active in nature, powered either externally or through the HDMI interface itself.
HDMI cables should meet the signal requirements of the latest specification. As a result, and as with DVI, the maximum cable length is somewhat variable. For HDMI, cable length depends heavily on the materials used to construct the cable. Passive cables tend to extend no farther than 15 meters, while adding electronics within the cable to create an active version results in lengths as long as 30 meters.
DisplayPort is a royalty-free digital display interface from the Video Electronics Standards Association (VESA) that uses less power than other digital interfaces and VGA. Introduced in 2008, it's designed to replace VGA and DVI. To help ease the transition, it's backward compatible with both standards, using an adapter. In addition, an adapter allows HDMI and DVI voltages to be lowered to those required by DisplayPort because it is functionally similar to HDMI and DVI. DisplayPort cables can extend 3 meters, unless an active cable powers the run, in which case the cable can extend to 33 meters. DisplayPort is intended primarily for video, but, like HDMI, it can transmit audio and video simultaneously.
Figure 3.30 shows a DisplayPort port on a laptop as well as a connector. The DisplayPort connector latches itself to the receptacle with two tiny hooks. A push-button mechanism serves to release the hooks for removal of the connector from the receptacle. Note the beveled keying at the bottom-left corner of the port.
FIGURE 3.30 A DisplayPort port and cable
The DisplayPort standard also specifies a smaller connector, known as the Mini DisplayPort (MDP) connector. The MDP is electrically equivalent to the full-size DP connector and features a beveled keying structure, but it lacks the latching mechanism present in the DP connector. The MDP connector looks identical to a Thunderbolt 2 connector, which we covered in the “Peripheral Cables and Connectors” section earlier in this chapter.
At the beginning of this chapter, we said that we were going to move outside the box and talk about external peripherals, cables, and connectors. For the most part that's true, but here we need to take a small digression to talk about connecting hard drives, most of which are internal. Some of this you already learned in Chapter 2, so this could feel like a review. Of course, there are SATA and PATA connectors, but we'll also throw in two new ones—SCSI and eSATA.
Remember that all drives need some form of connection to the motherboard so that the computer can “talk” to the disk drive. Regardless of whether the connection is built into the motherboard (onboard) or on an adapter card (off-board), internal or external, the standard for the attachment is based on the drive's requirements. These connections are known as drive interfaces. The interfaces consist of circuitry and a port, or header.
The most common hard drive connector used today is Serial Advanced Technology Attachment (SATA). Figure 3.31 shows SATA headers, which you have seen before, and a SATA cable. Note that the SATA cable is flat, and the connector is keyed to fit into the motherboard header in only one way. SATA data cables have a 7-pin connector. SATA power cables have 15 pins and are wider than the data connector.
FIGURE 3.31 SATA connectors and cable
The SATA we've discussed so far is internal, but there's an external version as well, appropriately named external SATA (eSATA). It uses the same technology, only in an external connection. The port at the bottom center of Figure 3.32 is eSATA. It entered the market in 2003, is mostly intended for hard drive use, and can support up to 15 devices on a single bus.
FIGURE 3.32 eSATA
Table 3.4 shows some of the eSATA specifications.
Version | Year | Speed | Names |
---|---|---|---|
Revision 1.0 | 2003 | 1.5 Gbps | SATA I, SATA 1.5 Gb/s |
Revision 2.0 | 2005 | 3.0 Gbps | SATA II, SATA 3Gb/s |
Revision 3.0 | 2009 | 6.0 Gbps | SATA III, SATA 6Gb/s |
TABLE 3.4 eSATA specifications
You will commonly see the third generation of eSATA (and SATA) referred to as SATA 6 or SATA 6 Gb/s. This is because if they called it SATA 3, there would be confusion with the second generation, which had transfer speeds of 3.0 Gbps.
An interesting fact about eSATA is that the interface does not provide power, which is a big negative compared to its contemporary high-speed serial counterparts. To overcome this limitation, there is another eSATA port that you might see, called Power over eSATA, eSATA+, eSATAp, or eSATA/USB. It's essentially a combination eSATA and USB port. Since the port is a combination of two others, neither sanctioning body officially recognizes it (which is probably why there are so many names—other companies call it what they want to). Figure 3.33 shows this port.
FIGURE 3.33 USB over eSATA
You can see that this port is slightly different from the one in Figure 3.32, and it's also marked with a USB icon next to the eSATA one. On the market, you can purchase cables that go from this port to an eSATA device and provide it with power via the eSATAp port.
Prior to SATA, the most popular hard drive connector was Integrated Drive Electronics (IDE), which has now been renamed Parallel Advanced Technology Attachment (PATA). There is no difference between PATA and IDE, other than the name. Figure 3.34 shows PATA connectors on a motherboard next to a PATA cable. Refer back to Chapter 2, Figure 2.9, to see a direct comparison of SATA and PATA connectors on a hard drive.
FIGURE 3.34 PATA connectors and cable
PATA drives use a 40-pin flat data cable, and there are a few things to note about it. First, there is an off-colored stripe (often red, pink, or blue) along one edge of the cable to designate where pin 1 is. On a PATA drive, pin 1 is always on the edge nearest the power connector. The second thing to note is that there are three connectors—one for the motherboard and two for drives. PATA technology specifies that there can be two drives per cable, in a primary and secondary configuration. The primary drive will be attached to the other end of the cable, and the secondary, if connected, will use the middle connector. In addition, the drive itself may need to be configured for primary or secondary by using the jumper block on the drive. Most PATA drives will auto-configure their status based on their position on the cable, but if there is a conflict, they can be manually configured.
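Purely as an aid to remembering the cable rules above, here is a toy sketch of how a PATA drive's role follows from its connector position and jumper setting; the function name and values are invented for illustration and don't model any real drive firmware.

```python
# Toy model of PATA drive roles on one cable, per the rules above:
# the drive on the end connector is primary, the middle connector is
# secondary, and a manual jumper setting overrides cable position.
def pata_role(cable_position, jumper="cable_select"):
    if jumper in ("primary", "secondary"):
        return jumper  # manually jumpered drives ignore their position
    return {"end": "primary", "middle": "secondary"}[cable_position]

print(pata_role("end"))                       # primary
print(pata_role("middle"))                    # secondary
print(pata_role("middle", jumper="primary"))  # primary (forced by jumper)
```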
Power is supplied by a 4-pin power connector known as a Molex connector, shown in Figure 3.35. If you have a PATA drive and a SATA-supporting power supply (or vice versa), you can buy an adapter to convert the power to what you need. The same holds true for data connectors as well.
FIGURE 3.35 Molex power connector
A fourth type of hard drive connector is called Small Computer System Interface (SCSI). The acronym is pronounced “scuzzy,” even though the original designer intended for it to be called “sexy.” The most common usage is for storage devices, but the SCSI standard can be used for other peripherals as well. You won't see many SCSI interfaces in home computers—it's more often found in servers, dedicated storage solutions, and high-end workstations.
Early versions of SCSI used a parallel bus interface called SCSI Parallel Interface (SPI). Starting in 2005, SPI was replaced by Serial Attached SCSI (SAS), which, as you may guess, is a serial bus. If you compare SCSI to other popular drive interfaces at the time, SCSI was generally faster but more expensive than its counterparts, such as IDE.
Although it's essentially obsolete now, you might find some details of SPI interesting. The first standard, ratified in 1986, was an 8-bit bus that provided for data transfers of 5 MBps. Because it was an 8-bit bus, it could support up to seven devices. (The motherboard or expansion card header was the eighth.) Each device needed a unique ID from 0 to 7, and devices were attached in a daisy-chain fashion. A terminator (essentially a big resistor) needed to be attached to the end of the chain; otherwise, the devices wouldn't function.
In 1994, the 8-bit version was replaced by a 16-bit version that supported up to 15 devices, and later 16-bit implementations reached transfer speeds of 320 MBps. Compared to the 100 MBps that IDE supported at the time, you can see why people wanted SCSI!
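To picture the SPI rules just described (a shared bus with unique IDs, daisy-chained devices, and a terminator at the end), here is a toy checker; the function and its messages are invented for illustration and aren't part of any SCSI specification.

```python
# Toy validator for a parallel SCSI (SPI) chain. On an 8-bit bus there are
# 8 IDs (0-7) shared by the host adapter and up to seven devices, every ID
# must be unique, and the end of the chain must be terminated.
def validate_scsi_chain(ids_in_use, terminated):
    problems = []
    if len(ids_in_use) > 8:
        problems.append("an 8-bit bus supports only 8 IDs (host adapter plus 7 devices)")
    if len(set(ids_in_use)) != len(ids_in_use):
        problems.append("duplicate SCSI IDs on the bus")
    if any(scsi_id not in range(8) for scsi_id in ids_in_use):
        problems.append("every ID must be in the range 0-7")
    if not terminated:
        problems.append("the end of the chain must have a terminator")
    return problems or ["chain looks valid"]

print(validate_scsi_chain([7, 0, 1, 2], terminated=True))   # host at 7, three devices
print(validate_scsi_chain([7, 3, 3], terminated=False))     # duplicate ID, no terminator
```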
SPI had different connectors, depending on the standard; 50-pin, 68-pin, and 80-pin connectors were commonly used. Figure 3.36 shows two 50-pin Centronics connectors, which were common for many years. Figure 3.37 shows a terminator, with the top cover removed so that you can see the electronics.
FIGURE 3.36 Two 50-pin SCSI connectors
By Smial at German Wikipedia - Own work, CC BY-SA 2.0 de, https://commons.wikimedia.org/w/index.php?curid=1009512
FIGURE 3.37 A SCSI terminator
By Adamantios - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=6116837
Of the newer SCSI implementations, the one you will most likely encounter is SAS. For example, as we mentioned in Chapter 2, most 15,000 rpm hard drives are SAS drives. From an architectural standpoint, SAS differs greatly from SPI, starting with the fact that it's serial, not parallel. What they do share is the use of the SCSI command architecture, which is a group of commands that can be sent from the controller to the device to make it do something, such as write or retrieve data.
A SAS system of hard drives works much like the SATA and PATA systems you've already learned about. There's the controller, the drive, and the cable that connects it. SAS uses its own terminology, though, and adds a component called an expander. Here are the four components of a SAS system: the initiator (the controller that issues commands), the target (the device, such as a hard drive, that receives and responds to those commands), the service delivery subsystem (the cabling and connectors that carry data between the initiator and the target), and the expander (a switch-like device that lets a single initiator communicate with a large number of targets).
Figure 3.38 shows a SAS cable and connector. It's slightly wider than a SATA power and data connector together. The other end of a cable such as this might have an identical SAS connector or a mini-SAS connector, or it might pigtail into four SATA or mini-SAS connectors.
FIGURE 3.38 A SAS connector
By Adamantios - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=6117374
Table 3.5 lists SAS standards and maximum throughput.
Standard | Year | Throughput |
---|---|---|
SAS-1 | 2005 | 3 Gbps |
SAS-2 | 2009 | 6 Gbps |
SAS-3 | 2013 | 12 Gbps |
SAS-4 | 2017 | 22.5 Gbps |
TABLE 3.5 SAS standards and speeds
SAS offers several advantages over SPI, including point-to-point connections (no shared bus, unique IDs, or terminators to configure), support for far more devices through the use of expanders, and steadily increasing throughput with each new generation.
With the invention of super-fast M.2 and NVMe hard drives, which you learned about in Chapter 2, it's hard to say what the future of SAS is. For example, SAS-5 (45 Gbps) has been under development since around 2018, but there is no official release date and there seems to be no impetus to get it to market. Most likely, SAS will continue to have a place in corporate environments with large-scale storage solutions, while the others will provide leading-edge speed for the workstation environment, particularly among laptops and smaller devices.
In this chapter, you first learned about peripheral types. We broke them into four categories: video, audio, input/output, and storage. Video peripherals include monitors, projectors, and webcams. There aren't many audio connectors, but most use the TRS connector you learned about. Input and output devices are plentiful, and we concentrated on keyboards and mice. Storage devices and optical drives can be external, and an example is an external network-attached storage (NAS) device.
In the second section of the chapter, you learned about various cable and connection types, and the purposes and uses of peripheral types. First, you learned about peripheral cables and connectors, such as USB, Lightning, Thunderbolt, and serial. Then we moved on to video cables. Topics included the analog VGA standard, as well as the digital standards DVI, HDMI, and DisplayPort. Then, we covered hard drive connections and cables related to SATA, eSATA, IDE (PATA), and SCSI.
Recognize and understand different peripheral connectors and adapters. Expansion cards and motherboards have external connectivity interfaces. The interfaces have connectors that adhere to some sort of standard for interconnecting with a cable or external device. Knowing these specific characteristics can help you differentiate among the capabilities of the interfaces available to you. Understanding when to use an adapter to convert one connector to another is crucial to achieving connectivity among differing interfaces. Adapters you should know are DVI-to-HDMI, USB-to-Ethernet, and DVI-to-VGA.
Recognize and be able to describe display connectors specifically. Although a type of peripheral connector, display connectors are in a class all their own. Types include VGA, HDMI, mini-HDMI, DisplayPort, and the various versions of DVI.
Recognize and understand the purpose of hard drive cables and connectors. The connectors you should recognize are SATA, eSATA, IDE (PATA), and SCSI. Each of them connects hard drives or optical drives. Molex connectors are used to power PATA devices.
Know the various peripheral cables and their connectors. Multipurpose cables include USB, Lightning, Thunderbolt, and serial. USB has the largest variety of connectors, including USB-A and USB-B and their mini- and micro- versions, as well as the newer USB-C. USB cables also can have proprietary connectors, such as Apple's Lightning connector. Thunderbolt can use a proprietary connector or USB-C. Serial cables have a DB-9 connector.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance-based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
Looking at the back of a computer, you see the interfaces shown in the following graphic. Which type of cables do you need to plug into each one?
THE FOLLOWING COMPTIA A+ 220-1101 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
Even as technology progresses and almost all of our lives seem digitized, our society is still reliant on paper. When we conduct business, we use different types of paper documents, such as contracts, letters, and, of course, money. And because most of those documents are created on computers, printers are inherently important. Even with electronic business being the norm in many situations, you will likely still have daily situations that require an old-fashioned hard copy of something.
Printers are electromechanical output devices that are used to put information from the computer onto paper. They have been around since the introduction of the computer. Other than the display monitor, the printer is the most popular output device purchased for a computer because a lot of people want and sometimes need to have paper copies of the documents they create.
In this chapter, we will discuss the details of each major type of printing technology, including impact printers, inkjet printers, laser printers, and thermal printers. We'll also get into three-dimensional (3D) printers, which are an entirely different output animal and have nothing to do with putting ink on paper. They are so different that it's almost a misnomer to call them printers. Unless we specifically talk about 3D printing, assume we mean the classic two-dimensional kind. Once we cover the different types, we'll talk about installing and configuring printers and finish up with a section on printer maintenance.
Several types of printers are available on the market today. As with all other computer components, there have been significant advancements in printer technology over the years. Most of the time, when faced with the decision of purchasing a printer, you're going to be weighing performance versus cost. Some of the higher-quality technologies, such as color laser printing, are relatively expensive for the home user. Other technologies are less expensive but don't provide the same level of quality.
In the following sections, you will learn about the various types of print technologies that you will see as a technician as well as their basic components and how they function. Specifically, we are going to look at four classifications of classic printing—impact, inkjet, laser, and thermal—and then finish up with a primer on 3D printing.
The most basic type of printer is in the category known as an impact printer. Impact printers, as their name suggests, use some form of impact and an inked printer ribbon to make an imprint on the paper. Impact printers also use a paper feed mechanism called a tractor feed that requires special paper. Perhaps you've seen it before—it's continuous-feed paper with holes running down both edges.
There are two major types of impact printers: daisy-wheel and dot-matrix. Each type has its own service and maintenance issues.
The first type of impact printer to know about is the daisy-wheel printer. This is one of the oldest printing technologies in use. These impact printers contain a wheel (called the daisy wheel because it looks like a daisy) with raised letters and symbols on each “petal” (see Figure 4.1). When the printer needs to print a character, it sends a signal to the mechanism that contains the wheel. This mechanism is called the print head. The print head rotates the daisy wheel until the required character is in place. An electromechanical hammer (called a solenoid) then strikes the back of the petal containing the character. The character pushes up against an inked ribbon that ultimately strikes the paper, making the impression of the requested character.
FIGURE 4.1 A daisy-wheel printer mechanism
Daisy-wheel printers were among the first types of impact printer developed. Their speed is rated by the number of characters per second (cps) they can print. The earliest printers could print only two to four characters per second. Aside from their poor speed, the main disadvantage of this type of printer is that it makes a lot of noise when printing—so much so, in fact, that special enclosures were developed to contain the noise. There is also no concept of using multiple fonts; the font is whatever the character on the wheel looks like.
The daisy-wheel printer has a few advantages, of course. First, because it is an impact printer, you can print on multipart forms (like carbonless receipts), assuming that they can be fed into the printer properly. Sometimes, you will hear this type of paper referred to as impact paper. Second, it is relatively inexpensive compared to the price of a laser printer of the same vintage. Finally, the print quality is easily readable; the level of quality was given a name: letter quality (LQ). Today, LQ might refer to quality that's better than an old-school typewriter (if you're familiar with them) but not up to inkjet standards.
The other type of impact printer to understand is the dot-matrix printer. These printers work in a manner similar to daisy-wheel printers, but instead of a spinning, character-imprinted wheel, the print head contains a row of pins (short, sturdy stalks of hard wire). These pins are triggered in patterns that form letters and numbers as the print head moves across the paper (see Figure 4.2).
FIGURE 4.2 Formation of images in a dot-matrix printer
The pins in the print head are wrapped with coils of wire to create a solenoid and are held in the rest position by a combination of a small magnet and a spring. To trigger a particular pin, the printer controller sends a signal to the print head, which energizes the wires around the appropriate print wire. This turns the print wire into an electromagnet, which repels the print pin, forcing it against the ink ribbon and making a dot on the paper. The arrangement of the dots in columns and rows creates the letters and numbers that you see on the page. Figure 4.2 illustrates this process.
The main disadvantage of dot-matrix printers is their image quality, which can be quite poor compared to the quality produced with a daisy wheel. Dot-matrix printers use patterns of dots to make letters and images, and the early dot-matrix printers used only nine pins to make those patterns. The output quality of such printers is referred to as draft quality—good enough for rough drafts and internal review copies, but not for polished documents. Each letter looked fuzzy because the dots were spaced as far as they could be and still be perceived as a letter or image. As more pins were crammed into the print head (17-pin and 24-pin models were eventually developed), the quality increased because the dots were closer together. Dot-matrix technology ultimately improved to the point that a letter printed on a dot-matrix printer was almost indistinguishable from daisy-wheel output. This level of quality is known as near letter quality (NLQ).
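To make the idea of dots forming characters concrete, here is a small sketch that "prints" a letter from a 7-row, 5-column pin pattern; the pattern is made up for illustration and is far coarser than what a real 9-pin or 24-pin print head produces.

```python
# A made-up 7x5 dot pattern for the letter A. Each 1 represents a pin
# firing against the ribbon (a dot on the page); each 0 is a blank spot.
LETTER_A = [
    "01110",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]

def print_dot_matrix(pattern):
    for row in pattern:
        print("".join("#" if dot == "1" else " " for dot in row))

print_dot_matrix(LETTER_A)
```

Cramming more pins (and therefore more rows of dots) into the same vertical space is exactly what moved output quality from draft quality toward near letter quality.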
Dot-matrix printers are noisy, but the print wires and print head are covered by a plastic dust cover, making them quieter than daisy-wheel printers. They also use a more efficient printing technology, so the print speed is faster (typically starting around 72 cps). Some dot-matrix printers (like the Epson DFX series, which can run up to 1550 cps) can print close to a page per second! Finally, because dot-matrix printers are also impact printers, they can use multipart forms. Because of these advantages, dot-matrix printers quickly made daisy-wheel printers obsolete.
One of the most popular types of printers in use today is the inkjet printer. As opposed to impact printers, which strike the page, these printers spray ink on the page to form the image. Inkjet printers typically use a reservoir of ink, a pump, and a nozzle to accomplish this. Older inkjet printers were messy, noisy, and inefficient, but the technology is good enough now that you see plenty of photo printers using inkjet technology. You might also hear these types of printers referred to as bubble-jet printers, but that term is trademarked by Canon. You can think of inkjets as spraying droplets of ink in a very high-definition dot-matrix pattern, although printer manufacturers would likely scoff at the comparison to an older technology.
In the following sections, you will learn the parts of an inkjet printer as well as how inkjet printers work.
Inkjet printers are simple devices. They contain very few parts (even fewer than dot-matrix printers) and, as such, are inexpensive to manufacture. It's common today to have a $40 to $50 inkjet printer with print quality that rivals that of basic laser printers.
The printer parts can be divided into the following categories: the print head and ink cartridges; the head carriage, belt, and stepper motor; the paper feed mechanism; and the control, interface, and power circuitry.
The first part of an inkjet printer is the one that people see the most: the print head. This part of a printer contains many small nozzles (usually 100 to 200) that spray the ink in small droplets onto the page. Many times, the print head is part of the ink cartridge, which contains a reservoir of ink and the print head in a removable package. Most color inkjet printers include multiple print heads. Either there will be one for the black cartridge and one for the color one, or there will be one for each of the CMYK (cyan, magenta, yellow, and black) print inks. The print cartridge must be replaced as the ink supply runs out.
Inside the ink cartridge are several small chambers. At the top of each chamber are a metal plate and a tube leading to the ink supply. At the bottom of each chamber is a small pinhole. These pinholes are used to spray ink on the page to form characters and images as patterns of dots, similar to the way a dot-matrix printer works but with much higher resolution.
There are two methods of spraying the ink out of the cartridge. Hewlett-Packard (HP) popularized the first method: when a particular chamber needs to spray ink, an electric signal is sent to the heating element, energizing it. The elements heat up quickly, causing the ink to vaporize. Because of the expanding ink vapor, the ink is pushed out of the pinhole and forms a bubble. As the vapor expands, the bubble eventually gets large enough to break off into a droplet. The rest of the ink is pulled back into the chamber by the surface tension of the ink. When another drop needs to be sprayed, the process begins again. The second method, developed by Epson, uses a piezoelectric element (either a small rod or a unit that looks like a miniature drum head) that flexes when energized. The outward flex pushes the ink from the nozzle; on the return, it sucks more ink from the reservoir.
When the printer is done printing, the print head moves back to its maintenance station. The maintenance station contains a small suction pump and ink-absorbing pad. To keep the ink flowing freely, before each print cycle the maintenance station pulls ink through the ink nozzles using vacuum suction. The pad absorbs this expelled ink. The station serves two functions: to provide a place for the print head to rest when the printer isn't printing and to keep the print head in working order.
Another major component of the inkjet printer is the head carriage and the associated parts that make it move. The print head carriage is the component of an inkjet printer that moves back and forth during printing. It contains the physical as well as electronic connections for the print head and (in some cases) the ink reservoir. Figure 4.3 shows an example of a head carriage. Note the clips that keep the ink cartridge in place and the electronic connections for the ink cartridge. These connections cause the nozzles to fire, and if they aren't kept clean, you may have printing problems.
FIGURE 4.3 A print head carriage (holding two ink cartridges) in an inkjet printer
The stepper motor and belt make the print head carriage move. A stepper motor is a precisely made electric motor that can move in the same very small increments each time it is activated. That way, it can move to the same position(s) time after time. The motor that makes the print head carriage move is also often called the carriage motor or carriage stepper motor. Figure 4.4 shows an example of a stepper motor.
FIGURE 4.4 A carriage stepper motor
In addition to the motor, a belt is placed around two small wheels or pulleys and attached to the print head carriage. This belt, called the carriage belt, is driven by the carriage motor and moves the print head back and forth across the page while it prints. To keep the print head carriage aligned and stable while it traverses the page, the carriage rests on a small metal stabilizer bar. Figure 4.5 shows the entire system—the stepper motor, carriage belt, stabilizer bar, and print head carriage.
In addition to getting the ink onto the paper, the printer must have a way to get the paper into the printer. That's where the paper feed mechanism comes in. The paper feed mechanism picks up paper from the paper drawer and feeds it into the printer. This component consists of several smaller assemblies. First are the pickup rollers (see Figure 4.6), which are one or more rubber rollers with a slightly grippy texture; they rub against the paper as they rotate and feed the paper into the printer. They work against small cork or rubber patches known as separation pads (see Figure 4.7), which help keep the rest of the paper in place so that only one sheet goes into the printer. The pickup rollers are turned on a shaft by the pickup stepper motor.
FIGURE 4.5 Carriage stepper motor, carriage belt, stabilizer bar, and print head carriage in an inkjet printer
FIGURE 4.6 Inkjet pickup roller (center darker roller)
FIGURE 4.7 Inkjet separation pads
Sometimes the paper that is fed into an inkjet printer is placed into a paper tray, which is simply a small plastic tray in the front of the printer that holds the paper until it is fed into the printer by the paper feed mechanism. On smaller printers, the paper is placed vertically into a paper feeder at the back of the printer; it uses gravity, in combination with feed rollers and separation pads, to get the paper into the printer. No real rhyme or reason dictates which manufacturers use these different parts; some models use them, and some don't. Generally, more expensive printers use paper trays because they hold more paper. Figure 4.8 shows an example of a paper tray on an inkjet printer.
FIGURE 4.8 A paper tray on an inkjet printer
Next are the paper feed sensors. These components tell the printer when it is out of paper as well as when a paper jam has occurred during the paper feed process. Figure 4.9 shows an example of a paper feed sensor. Finally, there is the duplexing assembly, which allows for printing on both sides of a page. After the first side is printed and has a few seconds to dry, the duplexing assembly will pull the paper back in and flip it over to print the second side. Not all inkjet printers have one, but those that do will usually have them near the back bottom of the printer. Figure 4.10 shows the duplexing assembly rollers. This area is the most likely one to incur a paper jam, so there's usually a removable panel inside the printer for easy access to clear the problem.
FIGURE 4.9 A paper feed sensor on an inkjet printer
FIGURE 4.10 Duplexing assembly rollers
Being able to identify the parts of an inkjet printer is an important skill for an A+ candidate. In Exercise 4.1, you will identify the parts of an inkjet printer. For this exercise, you'll need an inkjet printer.
The final component group is the electronic circuitry for printer control, printer interfaces, and printer power. The printer control circuits are usually on a small circuit board that contains all the circuitry to run the stepper motors the way the printer needs them to work (back and forth, load paper and then stop, and so on). These circuits are also responsible for monitoring the health of the printer and for reporting that information back to the PC.
The second component, the interface circuitry (commonly called a port), makes the physical connection to whatever signal is coming from the computer (USB, serial, network, infrared, etc.) and also connects the physical interface to the control circuitry. The interface circuitry converts the signals from the interface into the data stream that the printer uses.
The last set of circuits the printer uses is the power circuits. Essentially, these conductive pathways convert 110V (in the United States) or 220V (in most of the rest of the world) from a standard wall outlet into the voltages that the inkjet printer uses, usually 12V and 5V, and distribute those voltages to the other printer circuits and devices that need it. This is accomplished through the use of a transformer. A transformer, in this case, takes the 110V AC current and changes it to 12V DC (among others). This transformer can be either internal (incorporated into the body of the printer) or external. Either design can be used in today's inkjets, although the integrated design is preferred because it is simpler and keeps the bulky transformer out of sight.
Before you print to an inkjet printer, you must ensure that the device is calibrated. Calibration is the process by which a device is brought within functional specifications. For example, inkjet printers need their print heads aligned so that they print evenly and don't print funny-looking letters and unevenly spaced lines. The process is part of the installation for all inkjet printers. Printers will typically run a calibration routine every time you install new ink cartridges. You will only need to manually initiate a calibration if the printing alignment appears off.
Just as with other types of printing, the inkjet printing process consists of a set of steps that the printer must follow in order to put the data onto the page being printed. The following steps take place whenever you click the Print button in your favorite software (like Microsoft Word or Google Chrome):
The printer stores the received data in its onboard print buffer memory.
A print buffer is a small amount of memory (typically 512 KB to 16 MB) used to store print jobs as they are received from the printing computer. This buffer allows several jobs to be queued at the printer and helps printing to be completed quickly.
If the printer has not printed in a while, the printer's control circuits activate a cleaning cycle.
A cleaning cycle is a set of steps the inkjet printer goes through to purge the print heads of any dried ink. It uses a special suction cup and sucking action to pull ink through the print head, dislodging any dried ink or clearing stuck passageways.
Once the printer is ready to print, the control circuitry activates the paper feed motor.
This causes a sheet of paper to be fed into the printer until the paper activates the paper feed sensor, which stops the feed until the print head is in the right position and the leading edge of the paper is under the print head. If the paper doesn't reach the paper feed sensor in a specified amount of time after the stepper motor has been activated, the Out Of Paper light is turned on and a message is sent to the computer.
The motor is moved one small step, and the print head sprays the dots of ink on the paper in the pattern dictated by the control circuitry.
Typically, this is either a pattern of black dots or a pattern of CMYK inks that are mixed to make colors.
Then the stepper motor moves the print head another small step; the process repeats all the way across the page.
This process is so quick, however, that the entire series of starts and stops across the page looks like one smooth motion.
Once the page is finished, the feed stepper motor is actuated and ejects the page from the printer into the output tray. (On printers with a duplexing assembly, the paper is only partially ejected so the duplexing assembly can grab it, pull it back in, and flip it over for printing on the second side.)
If more pages need to print, the process for printing the next page begins again at step 7.
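The sketch below restates those steps as a simple loop: buffer the job, run a cleaning cycle if the printer has been idle, feed a sheet, spray and step across each line, and eject the page. Every function name and message here is invented for illustration; it is not modeled on any real printer firmware.

```python
# Toy model of the inkjet printing loop described above.
def print_job(pages, printed_recently=False):
    buffer = list(pages)  # the job sits in the print buffer as it arrives
    if not printed_recently:
        print("Running a cleaning cycle to purge dried ink from the print head")
    while buffer:
        page = buffer.pop(0)
        print("Feeding a sheet until it trips the paper feed sensor")
        for line in page:
            # One pass of the carriage: spray dots, then step the print head.
            print(f"Spraying dots for {line!r}, then stepping the carriage")
        print("Ejecting the page (or re-feeding it if a duplexing assembly is fitted)")

print_job(pages=[["Hello,", "world!"]])
```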
Laser printers and LED printers are referred to as page printers because they receive their print job instructions one page at a time rather than receiving instructions one line at a time. There are two major types of page printers that use the electrophotographic (EP) imaging process. The first uses a laser to scan the image onto a photosensitive drum, and the second uses an array of light-emitting diodes (LEDs) to create the image on the drum. Even though they write the image in different ways, both types still follow the laser printer imaging process. Since the A+ exam focuses on the laser printer imaging process and not on differences between laser and LED, we'll focus on the same here.
Xerox, Hewlett-Packard, and Canon were pioneers in developing the laser printer technology we use today. Scientists at Xerox developed the electrophotographic (EP) imaging process in 1971. HP introduced the first successful desktop laser printer in 1984, using Canon hardware that used the EP process. This technology uses a combination of static electric charges, laser light, and a black powdery ink-like substance called toner. Printers that use this technology are called EP process laser printers, or just laser printers. Every laser printer technology has its foundations in the EP printer imaging process.
Let's discuss the basic components of the EP laser printer and how they operate so that you can understand the way an EP laser printer works.
Most printers that use the EP imaging process contain nine standard assemblies: the toner cartridge, laser scanner, high-voltage power supply, DC power supply, paper transport assembly (including paper-pickup rollers and paper-registration rollers), transfer corona, fusing assembly, printer controller circuitry, and ozone filter. Let's discuss each of the components individually, along with a duplexing assembly, before we examine how they all work together to make the printer function.
The EP toner cartridge (see Figure 4.11), as its name suggests, holds the toner. Toner is a black carbon substance mixed with polyester resins to make it flow better and iron oxide particles to make it sensitive to electrical charges. These two components make the toner capable of being attracted to the photosensitive drum and of melting into the paper. In addition to these components, toner contains a medium called the developer (also called the carrier), which carries the toner until it is used by the EP process.
FIGURE 4.11 An EP toner cartridge
The toner cartridge also contains the EP print drum. This drum is coated with a photosensitive material that can hold a static charge when not exposed to light but cannot hold a charge when it is exposed to light—a curious phenomenon and one that EP printers exploit for the purpose of making images. Finally, the drum assembly contains a cleaning blade that continuously scrapes the used toner off the photosensitive drum to keep it clean.
As we mentioned earlier, the EP photosensitive drum can hold a charge if it's not exposed to light. It is dark inside an EP printer, except when the laser scanning assembly shines on particular areas of the photosensitive drum. When it does that, the drum discharges, but only in the area that has been exposed. As the drum rotates, the laser scanning assembly scans the laser across the photosensitive drum, exposing the image onto it. Figure 4.12 shows the laser scanning assembly.
FIGURE 4.12 The EP laser scanning assembly (side view and simplified top view)
The EP process requires high-voltage electricity. The high-voltage power supply (HVPS) provides the high voltages used during the EP process. This component converts AC current from a standard wall outlet (120V and 60 Hz) into higher voltages that the printer can use. This high voltage is used to energize both the charging corona and the transfer corona.
The high voltages used in the EP process can't power the other components in the printer (the logic circuitry and motors). These components require low voltages, between +5VDC and +24VDC. The DC power supply (DCPS) converts house current into three voltages: +5VDC and –5VDC for the logic circuitry and +24VDC for the paper transport motors. This component also runs the fan that cools the internal components of the printer.
The paper transport assembly is responsible for moving the paper through the printer. It consists of a motor and several rubberized rollers that each performs a different function.
The first type of roller found in most laser printers is the feed roller, or paper pickup roller (see Figure 4.13). This D-shaped roller, when activated, rotates against the paper and pushes one sheet into the printer. This roller works in conjunction with a special rubber separation pad to prevent more than one sheet from being fed into the printer at a time.
FIGURE 4.13 Paper transport rollers
Another type of roller that is used in the printer is the registration roller (also shown in Figure 4.13). There are actually two registration rollers, which work together. These rollers synchronize the paper movement with the image-formation process in the EP cartridge. The rollers don't feed the paper past the EP cartridge until the cartridge is ready for it.
Both of these rollers are operated with a special electric motor known as an electronic stepper motor. This type of motor can accurately move in very small increments. It powers all the paper transport rollers as well as the fuser rollers.
When the laser writes (exposes) the images on the photosensitive drum, the toner then sticks to the exposed areas. (We'll cover this later in the “Electrophotographic Imaging Process” section.) How does the toner get from the photosensitive drum onto the paper? The transfer corona assembly (see Figure 4.14) is given a high-voltage charge, which is transferred to the paper, which, in turn, pulls the toner from the photosensitive drum.
FIGURE 4.14 The transfer corona assembly
Included in the transfer corona assembly is a static-charge eliminator strip that drains away the charge imparted to the paper by the corona. If you didn't drain away the charge, the paper would stick to the EP cartridge and jam the printer.
There are two types of transfer corona assemblies: those that contain a transfer corona wire and those that contain a transfer corona roller. The transfer corona wire is a small-diameter wire that is charged by the HVPS. The wire is located in a special notch in the floor of the laser printer (under the EP print cartridge). The transfer corona roller performs the same function as the transfer corona wire, but it's a roller rather than a wire. Because the transfer corona roller is directly in contact with the paper, it supports higher speeds. For this reason, the transfer corona wire is used infrequently in laser printers today.
The toner in the EP toner cartridge will stick to just about anything, including paper. This is true because the toner has a negative static charge and most objects have a net positive charge. However, these toner particles can be removed by brushing any object across the page. This could be a problem if you want the images and letters to stay on the paper permanently.
To solve this problem, EP laser printers incorporate a device known as a fuser (see Figure 4.15), which uses two rollers that apply pressure and heat to fuse the plastic toner particles to the paper. You may have noticed that pages from either a laser printer or a copier (which uses a similar device) come out warm. This is because of the fuser.
FIGURE 4.15 The fuser
The fuser is made up of three main parts: a halogen heating lamp, a Teflon-coated aluminum-fusing roller, and a rubberized pressure roller. The fuser uses the halogen lamp to heat the fusing roller to between 329° F (165° C) and 392° F (200° C). As the paper passes between the two rollers, the pressure roller pushes the paper against the fusing roller, which melts the toner into the paper.
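If you want to double-check the temperature figures above, the quick conversion below confirms that 165° C and 200° C correspond to the Fahrenheit values quoted.

```python
# Convert the fuser temperatures quoted above from Celsius to Fahrenheit.
def c_to_f(celsius):
    return celsius * 9 / 5 + 32

for c in (165, 200):
    print(f"{c} C = {c_to_f(c):.0f} F")  # prints 329 F and 392 F
```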
Another component in the laser printer that we need to discuss is the printer controller assembly. This large circuit board converts signals from the computer into signals for the various assemblies in the laser printer using a process known as rasterizing. This circuit board is usually mounted under the printer. The board has connectors for each type of interface and cables to each assembly.
When a computer prints to a laser printer, it sends a signal through a cable to the printer controller assembly. The controller assembly formats the information into a page's worth of line-by-line commands for the laser scanner. The controller sends commands to each of the components, telling them to wake up and begin the EP imaging process.
Your laser printer uses various high-voltage biases inside the case. As anyone who has been outside during a lightning storm can tell you, high voltages create ozone. Ozone is a chemically reactive gas that is created by the high-voltage coronas (charging and transfer) inside the printer. Because ozone is chemically reactive and can severely reduce the life of laser printer components, many older laser printers contain a filter to remove ozone gas from inside the printer as it is produced. This filter must be removed and cleaned with compressed air periodically. (Cleaning it whenever the toner cartridge is replaced is usually sufficient.) Most newer laser printers don't have ozone filters. This is because these printers don't use transfer corona wires but instead use transfer corona rollers, which dramatically reduce ozone emissions.
Any laser printer worth its money today can print on both sides of the paper (as can some nicer models of inkjet printers, mentioned earlier). This is accomplished through the use of a duplexing assembly. Usually located inside or on the back of the printer, the assembly is responsible for taking the paper, turning it over, and feeding it back into the printer so the second side can be printed.
The electrophotographic (EP) imaging process is the process by which an EP laser printer forms images on paper. It consists of seven major steps, each designed for a specific goal. Although many different manufacturers word these steps differently or place them in a different order, the basic process is still the same. Here are the steps in the order in which you will see them on the exam: processing, charging, exposing, developing, transferring, fusing, and cleaning.
Before any of these steps can begin, however, the controller must sense that the printer is ready to start printing (toner cartridge installed, fuser warmed to temperature, and all covers in place). Printing cannot take place until the printer is in its ready state, usually indicated by an illuminated Ready LED light or a display that says something like 00 READY (on HP printers). The computer sends the print job to the printer, which begins processing the data as the first step to creating output.
The processing step consists of two parts: receiving the image and creating the image. The computer sends the print job to the printer, which receives it via its print interface (USB, wireless, etc.). Then, the printer needs to create the print job in such a way that it can accurately produce the output.
If you think back to our discussion of impact printing earlier in this chapter, you might recall that impact printers produce images by creating one strip of dots at a time across the page. Laser printers use the same concept of rendering one horizontal strip at a time to create the image. Each strip across the page is called a scan line or a raster line.
A component of the laser printer called the Raster Image Processor (RIP) manages raster creation. Its responsibility is to generate an image of the final page in memory. How the raster gets created depends on the page-description language that your system is using, such as PostScript (PS) or Printer Control Language (PCL). (We will get into the details of PS and PCL in the “Page-Description Languages” section later in the chapter.) Ultimately, this collection of lines is what gets written to the photosensitive drum and onto the paper.
The next step in the EP process is the charging step (see Figure 4.16). In this step, a special wire or roller (called a charging corona) within the EP toner cartridge (above the photosensitive drum) gets high voltage from the HVPS. It uses this high voltage to apply a strong, uniform negative charge (around –600VDC) to the surface of the photosensitive drum.
FIGURE 4.16 The charging step of the EP process
Next is exposing the drum to the image, often referred to as the exposing step. In this step, the laser is turned on and scans the drum from side to side, flashing on and off according to the bits of information that the printer controller sends it as it communicates the individual bits of the image. Wherever the laser beam touches, the photosensitive drum's charge is severely reduced from –600VDC to a slight negative charge (around –100VDC). As the drum rotates, a pattern of exposed areas is formed, representing the image to be printed. Figure 4.17 shows this process.
FIGURE 4.17 The exposing step of the EP process
At this point, the controller sends a signal to the pickup roller to feed a piece of paper into the printer, where it stops at the registration rollers.
Now that the surface of the drum holds an electrical representation of the image being printed, its discrete electrical charges need to be converted into something that can be transferred to a piece of paper. The EP process step that accomplishes this is the developing step (see Figure 4.18). In this step, toner is transferred to the areas that were exposed in the exposing step.
FIGURE 4.18 The developing step of the EP process
A metallic roller called the developing roller inside an EP cartridge acquires a –600VDC charge (called a bias voltage) from the HVPS. The toner sticks to this roller because there is a magnet located inside the roller and because of the electrostatic charges between the toner and the developing roller. While the developing roller rotates toward the photosensitive drum, the toner acquires the charge of the roller (–600VDC). When the toner comes between the developing roller and the photosensitive drum, the toner is attracted to the areas that have been exposed by the laser (because these areas have a lesser charge, –100VDC). The toner also is repelled from the unexposed areas (because they are at the same –600VDC charge and like charges repel). This toner transfer creates a fog of toner between the EP drum and the developing roller.
The photosensitive drum now has toner stuck to it where the laser has written. The photosensitive drum continues to rotate until the developed image is ready to be transferred to paper in the next step.
At this point in the EP process, the developed image is rotating into position. The controller notifies the registration rollers that the paper should be fed through. The registration rollers move the paper underneath the photosensitive drum, and the process of transferring the image can begin; this is the transferring step.
The controller sends a signal to the transfer corona wire or roller (depending on which one the printer has) and tells it to turn on. The corona wire/roller then acquires a strong positive charge (+600VDC) and applies that charge to the paper. Thus charged, the paper pulls the toner from the photosensitive drum at the line of contact between the drum and the paper because the paper and toner have opposite charges. Once the registration rollers move the paper past the corona wire, the static-eliminator strip removes all charge from that line of the paper. Figure 4.19 details this step. If the strip didn't bleed this charge away, the paper would be attracted to the toner cartridge and cause a paper jam.
FIGURE 4.19 The transferring step of the EP process
The toner is now held in place by weak electrostatic charges and gravity. It will not stay there, however, unless it is made permanent, which is the reason for the fusing step.
The penultimate step in the EP imaging process is the fusing step. Here the toner image is made permanent. The registration rollers push the paper toward the fuser rollers. Once the fuser grabs the paper, the registration rollers push for only a short time longer. The fuser is now in control of moving the paper.
As the paper passes through the fuser, the 350° F fuser roller melts the polyester resin of the toner, and the rubberized pressure roller presses it permanently into the paper (see Figure 4.20). The paper continues through the fuser and eventually exits the printer.
FIGURE 4.20 The fusing step of the EP process
Once the paper completely exits the fuser, it trips a sensor that tells the printer to finish the EP process with the cleaning step.
In the last part of the laser imaging process, a rubber blade inside the EP cartridge scrapes any toner left on the drum into a used toner receptacle inside the EP cartridge, and a fluorescent lamp discharges any remaining charge on the photosensitive drum. (Remember that the drum, being photosensitive, loses its charge when exposed to light.) This step is called the cleaning step (see Figure 4.21).
FIGURE 4.21 The cleaning step of the EP process
The EP cartridge is constantly cleaning the drum. It may take more than one rotation of the photosensitive drum to make an image on the paper. The cleaning step keeps the drum fresh for each use. If you didn't clean the drum, you would see ghosts of previous pages printed along with your image.
At this point, the printer can print another page, and the EP process can begin again.
Figure 4.22 provides a diagram of all the parts involved in the EP printing process. Here's a summary of the process, which you should commit to memory: the print job is processed into a raster image, the charging corona applies a uniform –600VDC charge to the drum, the laser exposes the image onto the drum, toner is developed onto the exposed areas, the transfer corona pulls the toner onto the paper, the fuser melts the toner permanently into the paper, and the cleaning blade and lamp prepare the drum for the next page.
FIGURE 4.22 The EP imaging process
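As a memory aid for the seven steps and the charges involved, here is a rough sketch that simply walks through the process in order; the step names and voltages come from the discussion above, and the code itself is only illustrative.

```python
# Walk through the seven EP imaging steps and the charges described above
# (voltages are the approximate DC values given in this section).
EP_STEPS = [
    ("Processing",   "the RIP builds a raster image of the page in memory"),
    ("Charging",     "the charging corona puts a uniform -600VDC on the drum"),
    ("Exposing",     "the laser drops exposed areas to about -100VDC"),
    ("Developing",   "-600VDC toner sticks only to the -100VDC exposed areas"),
    ("Transferring", "the +600VDC transfer corona pulls toner onto the paper"),
    ("Fusing",       "heat and pressure melt the toner into the paper"),
    ("Cleaning",     "a blade scrapes the drum and a lamp erases leftover charge"),
]

for number, (name, what_happens) in enumerate(EP_STEPS, start=1):
    print(f"Step {number}: {name} - {what_happens}")
```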
The types of printers that you have learned about so far in this chapter account for 90 percent of all paper printers that are used with home or office computers and that you will see as a repair technician. The remaining 10 percent consist of other types of printers that primarily differ by the method they use to put colored material on the paper to represent what is being printed. Examples of these include solid ink, dye sublimation, and thermal printers. Keep in mind that, for the most part, these printers operate like other paper printers in many ways: they all have a paper feed mechanism (sheet-fed or roll); they all require consumables such as ink or toner and paper; they all use the same interfaces, for the most part, as other types of printers; and they are usually about the same size.
Thermal printers are primarily used in point-of-sale (POS) terminals. They print on special thermal paper, a waxy, heat-sensitive paper that comes on a roll and turns black where heat is applied. Thermal printers work by using a print head that is the width of the paper. When it needs to print, a heating element heats certain spots on the print head. The paper below the heated print head turns black in those spots. As the paper moves through the printer, the pattern of blackened spots forms an image on the page of what is being printed. Another type of thermal printer uses a heat-sensitive ribbon instead of heat-sensitive paper. A thermal print head melts wax-based ink from the ribbon onto the paper. These are called thermal transfer printers or thermal wax-transfer printers.
Thermal direct printers typically have long lives because they have few moving parts. The only unique part that you might not be as familiar with is the paper feed assembly, which often needs to accommodate a roll of paper instead of sheets. The paper is somewhat expensive, doesn't last long (especially if it is left in a very warm place, like a closed car in summer), and produces poorer-quality images than the paper used by most of the other printing technologies.
In 2011, the first commercially available 3D printer hit the market. Although the word “printing” is used, the technology and process are completely different from putting ink to paper. 3D printing is really a fabrication process, also called additive manufacturing. In it, a three-dimensional product is produced by “printing” thin layers of a material and stacking those layers on top of each other.
The first 3D printers were used in manufacturing environments. Over time, smaller, more economical models have been made for home use as well, although the technology still remains fairly expensive. There are two primary categories of 3D printers intended for home and small business use. The first uses rolls of plastic filament to create objects, and the second uses a reservoir of liquid resin and UV light. We won't cover them here, but 3D printers in industrial applications can use a variety of materials, including aluminum, copper, and other metals. Some enterprising soul also created a 3D printer that prints using melted chocolate. They very likely deserve a Nobel Prize.
Although they can produce complex creations, 3D filament (FDM) printers are relatively simple devices with few parts. For the examples here, we will use smaller 3D printers designed for home or small business use. Therefore, we'll focus on printers that use plastic filament as opposed to other materials. The primary components are as follows: the frame, the printing plate (or print bed), the extruder and its cooling fan, the control circuit board, and the filament and filament tube.
The frame holds the printer together. On the bottom of the printer will be the printing plate (or print bed), where the object is created. The extruder heats up and melts the filament, which is used to create the object. A cooling fan keeps the extruder from overheating. A circuit board will be installed somewhere in the frame to control the movement of the extruder assembly. Some printers will also have electronic displays and a clear protective case. Figure 4.23 shows a simple MakerBot 3D printer. It's relatively easy to see the frame, printing plate, display, and filament tube. 3D printers are connected to a computer using a USB cable.
On most 3D printers, the extruder is attached to metal rods that control the position of the extruder on x-, y-, and z-axes. As mentioned earlier, the extruder heats up and melts plastic filament. The extruder then moves around to create the object, adding one thin layer of material to the printing plate at a time. Figure 4.24 shows an extruder from a different 3D printer—it's the small black block at the bottom of the image. In this image, the filament tube is seen coming from the top of the extruder assembly.
FIGURE 4.23 A 3D filament printer
FIGURE 4.24 3D printer extruder
Filament comes on a spool, much like wire, and is shown in Figure 4.25. Be sure that the filament is compatible with the printer you intend to use it with. Here are the things to consider when purchasing replacement filament: the material (such as PLA or ABS), the filament diameter (commonly 1.75 mm or 2.85 mm), and the color.
Replacing filament is a straightforward process. The 3D printer's app (or interface panel) will have a Replace Filament button or option. Once you start the process, the extruder will heat up and start to expel the current filament. At some point, it will tell you to replace the roll. You remove the old roll and feed the new filament through the filament tube into the extruder. After a short time, you will see the new color come through as melted filament (if you changed colors), and you can use the app or interface panel to stop the replacement.
FIGURE 4.25 3D printer PLA filament
3D resin printers, also called stereolithography/digital light processing printers (SLA/DLP), look and act markedly different from filament printers. SLA/DLP printers use a reservoir of liquid resin combined with UV light that hardens the resin to create objects. The print bed is often at the top, and the printed object appears to rise out of the liquid reservoir. SLA/DLP printers can print objects in much finer detail than FDM can, but they're also slower and require a bit more effort at the end of printing. Figure 4.26 shows a Formlabs resin printer, with a finished 3D print inside.
FIGURE 4.26 Formlabs Form 3 resin printer
Photo courtesy formlabs.com
Explaining how resin printing works is much easier with the use of a visual aid, so take a look at Figure 4.27. Most resin printers appear to be upside down. The print bed is on top and moves up as the image is printed. In the middle is the resin tank with a transparent bottom, filled with liquid resin. A light source at the bottom (an LCD in cheaper models and a laser in nicer ones) shines ultraviolet (UV) light on the resin to cure it. As the first layer of the object is “printed,” the print bed moves up slightly and the laser writes the next layer.
When the 3D object is finished printing, uncured resin is removed with a rinse of isopropyl alcohol. Some objects will be put into post-curing to strengthen them even more.
FIGURE 4.27 Resin printing in action
Every 3D printer comes with its own software that helps manage the printing process; therefore, you will see some nuances in the process from printer to printer. From a big-picture standpoint, though, the printing process is similar for all 3D printers. The following are general steps taken to get from idea to 3D printed object:
Design the object using a computer-aided design (CAD) program.
The most well-known commercial software for this is probably AutoCAD by Autodesk. Another option is the free Tinkercad.
Export the file from the CAD software. Doing so will cause the CAD program to “slice” the object into layers, preparing it for printing. The exported file will be an STL file.
Prepare the file for printing. This step will vary somewhat, depending on the 3D printer's software. In many cases, the STL file can be imported into the printer's app, and the app will slice the file yet again, formatting the model specifically for the printer. Some apps can't slice, though, so third-party slicing software is needed. Examples include Cura, SliceCrafter, and Slic3r. Most slicers are free, although commercial versions are available.
Print the object. Small print jobs may take over an hour, depending on the printer and the size of the object. Larger jobs may take days to complete. The maximum object size will be determined by the model of printer. After the job is done, a little sanding or filing may be required to remove excess filament. A completed 3D print job (actually several jobs) is shown in Figure 4.28. In total, the objects are about 1.5" long. Higher-end 3D printers can create components that move, such as hinges and latches.
FIGURE 4.28 3D printed objects
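The export step above mentions that the exported file will be an STL file. If you're curious what that format actually contains, the sketch below writes a minimal ASCII STL holding a single triangle; the geometry is arbitrary, and real CAD exports are usually binary STL files with many thousands of triangles.

```python
# Write a minimal ASCII STL file containing one triangle.
triangle = [(0, 0, 0), (10, 0, 0), (0, 10, 0)]  # arbitrary vertices, in millimeters

with open("example.stl", "w") as f:
    f.write("solid example\n")
    f.write("  facet normal 0 0 1\n")  # normal for a flat triangle in the XY plane
    f.write("    outer loop\n")
    for x, y, z in triangle:
        f.write(f"      vertex {x} {y} {z}\n")
    f.write("    endloop\n")
    f.write("  endfacet\n")
    f.write("endsolid example\n")

print("Wrote example.stl - slicing software would turn this mesh into printable layers")
```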
Odds are that everyone either owns a printer or has easy access to a printer at a library, work, or some other place. Many retailers and computer manufacturers make it incredibly easy to buy a printer because they often bundle a printer with a computer system as an incentive to get you to buy.
The CompTIA A+ 220-1101 exam will test your knowledge of the procedures to install and maintain printers. In the following sections, we will discuss connecting printers through various interfaces, installing and sharing local and networked printers, implementing network printer security and scan services, performing printer maintenance, and installing printer upgrades.
A printer's interface is the collection of hardware and software that allows the printer to communicate with a computer. The hardware interface is commonly called a port. Each printer has at least one interface, but some printers have several interfaces, to make them more flexible in a multiplatform environment. If a printer has several interfaces, it can usually switch between them on the fly so that several computers can print at the same time.
An interface incorporates several components, including its interface type and the interface software. Each aspect must be matched on both the printer and the computer. For example, an HP LaserJet M480F has only a USB port. Therefore, you must use a USB cable (or wireless networking) as well as the correct software for the platform being used (for example, a Mac HP LaserJet M480F driver if you connect it to an iMac computer).
When we say interface types, we're talking about the ports used in getting the printed information from the computer to the printer. There are two major classifications here: wired and wireless. Wired examples are serial, parallel, USB, and Ethernet. Wireless options include 802.11 and Bluetooth. You've learned about the wired connections in Chapter 3, “Peripherals, Cables, and Connectors,” and you will learn more about the wireless connections in Chapter 5, “Networking Fundamentals.” Here you will learn how they apply to printers.
When computers send data serially, they send it 1 bit at a time, one after another. The bits stand in line like people at a movie theater, waiting to get in. Old-time serial (DB-9) connections were painfully slow, but new serial technology (Thunderbolt, eSATA, and others) makes it a more viable option than parallel. While it's quite common to see USB (another type of serial connection) printers on the market, it's rare to find any other types of serial printers out there.
When a printer uses parallel communication, it is receiving data 8 bits at a time over eight separate wires (one for each bit). Parallel communication was the most popular way of communicating from computer to printer for many years, mainly because it was faster than serial. In fact, the parallel port became so synonymous with printing that a lot of companies simply started referring to parallel ports as printer ports. Today, though, parallel printers are rare. The vast majority of wired printers that you see will be USB or Ethernet.
A parallel cable consists of a male DB-25 connector that connects to the computer and a male 36-pin Centronics connector that connects to the printer. Most of the cables are shorter than 10 feet. The industry standard that defines parallel communications is IEEE 1284; parallel cables should be IEEE 1284–compliant.
The most popular type of wired printer interface is the Universal Serial Bus (USB). In fact, it is the most popular interface for just about every peripheral. The convenience for printers is that it has a higher transfer rate than older serial or parallel connections, and it automatically recognizes new devices. And, of course, USB is physically very easy to connect.
Many printers sold today have a wired Ethernet interface that allows them to be hooked directly to an Ethernet cable. These printers have an internal network interface card (NIC) and ROM-based software that allow them to communicate on the network with servers and workstations.
As with any other networking device, the type of network interface used on the printer depends on the type of network to which the printer is being attached. It's likely that the only connection type that you will run into is RJ-45 for an Ethernet connection.
The latest trend in printer interface technology is to use wireless. Clearly, people love their Wi-Fi because it enables them to roam around their home or office and still remain connected to one another and to their network. It logically follows that someone came up with the brilliant idea that it would be nice if printers could be that mobile as well—after all, many are on carts with wheels. Some printers have built-in Wi-Fi interfaces, while others can accept wireless network cards. Wi-Fi–enabled printers support nearly all 802.11 standards (a, b, g, n, ac, ax), and the availability of devices will mirror the current popularity of each standard.
The wireless technology that is especially popular among peripheral manufacturers is Bluetooth. Bluetooth is a short-range wireless technology; most devices are specified to work within 10 meters (33 feet). Printers such as the HP Sprocket series and OfficeJet 150 mobile printers have Bluetooth capability.
When printing with a Bluetooth-enabled device (like a smartphone or tablet) and a Bluetooth-enabled printer, all you need to do is get within range of the printer (that is, move closer), select the printer from the device, and choose Print. The information is transmitted wirelessly through the air using radio waves and is received by the printer.
Now that we've looked at the ways that you can connect your printer, it's time to face a grim reality: computers and printers don't know how to talk to each other. They need help. That help comes in the form of interface software used to translate software commands into commands that the printer can understand.
There are two major components of interface software: the page-description language and the driver software. The page-description language (PDL) determines how efficient the printer is at converting the information to be printed into signals that the printer can understand. The driver software understands and controls the printer and must be written to communicate between a specific operating system and a specific printer. It is very important that you use the correct interface software for your printer. If you use either the wrong page-description language or the wrong driver software, the printer will print garbage—or possibly nothing at all.
A page-description language works just as its name implies: it describes the whole page being printed by sending commands that describe the text as well as the margins and other settings. The controller in the printer interprets these commands and turns them into laser pulses (or pin strikes). Several printer communication languages exist, but the three most common are PostScript (PS), Printer Control Language (PCL), and Graphics Device Interface (GDI).
The first page-description language was PostScript. Developed by Adobe, it was first used in the Apple LaserWriter printer. It made printing graphics fast and simple. Here's how PostScript works. The PostScript printer driver describes the page in terms of “draw” and “position” commands. The page is divided into a very fine grid (as fine as the resolution of the printer). When you want to print a square, a communication like the following takes place:
POSITION 1,42%DRAW 10%POSITION 1,64%DRAW10D% . . .
These commands tell the printer to draw a line on the page from line 42 to line 64 (vertically). In other words, a page-description language tells the printer to draw a line on the page and gives it the starting and ending points—and that's that. Rather than send the printer the location of each and every dot in the line and an instruction at each and every location to print that location's individual dot, PostScript can get the line drawn with fewer than five instructions. As you can see, PostScript uses commands that are more or less in English. The commands are interpreted by the processor on the printer's controller and converted into the print-control signals.
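To make the idea of draw-and-position commands more concrete, here is a minimal sketch of real PostScript operators, wrapped in Python only so the job file is easy to generate. The filename square.ps is hypothetical, and the job assumes a PostScript-capable printer on the receiving end; the PostScript itself simply draws a one-inch square.

```python
# Minimal PostScript job: draw a 1-inch square (72 points = 1 inch).
postscript_job = b"""%!PS
newpath
72 72 moveto      % start 1 inch from the bottom-left corner
144 72 lineto
144 144 lineto
72 144 lineto
closepath
stroke
showpage
"""

with open("square.ps", "wb") as f:   # hypothetical output file
    f.write(postscript_job)
```

Sending square.ps to a PostScript printer (for example, over the RAW TCP port discussed later in this chapter) produces the square without the computer ever rasterizing the page itself.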
When HP developed PCL, it was originally intended for use with inkjet printers as a competitor to PostScript. Since then, its role has been expanded to virtually every printer type, and it's a de facto industry standard.
GDI is actually a Windows component and is not specific to printers. Instead, it's a series of components that govern how images are presented to both monitors and printers. GDI printers work by using computer processing power instead of their own. The printed image is rendered to a bitmap on the computer and then sent to the printer. This means that the printer hardware doesn't need to be as powerful, which results in a less expensive printer. Generally speaking, the least expensive laser printers on the market are GDI printers.
The main advantage of page-description languages is that they move some of the processing from the computer to the printer. With text-only documents, they offer little benefit. However, with documents that have large amounts of graphics or that use numerous fonts, page-description languages make the processing of those print jobs happen much faster. This makes them an ideal choice for laser printers, although nearly every type of printer uses them.
The driver software controls how the printer processes the print job. When you install a printer driver for the printer you are using, it allows the computer to print to that printer correctly (assuming that you have the correct interface configured between the computer and printer). The driver must be written specifically for the operating system the computer is using and for the printer being used. In other words, Mac clients need a different driver than Windows clients need, even to print to the same printer.
When you need to print, you select the printer driver for your printer from a preconfigured list. The driver that you select has been configured for the type, brand, and model of printer, as well as the computer port to which it is connected. You can also select which paper tray the printer should use as well as any other features the printer has (if applicable). Also, each printer driver is configured to use a particular page-description language.
Although every device is different, there are certain accepted methods used for installing almost all of them. The following procedure works for installing many kinds of devices:
Before installing the printer, scout out the best location for it. If it's a home-based printer, you may want to choose a convenient but inconspicuous location. In an office setting, having the printer centrally located may save a lot of headaches. Consider the users and how easy or difficult it will be for them to get to the printer. Also consider connectivity. For a wireless printer, how close is it to an access point? (It should be close.) If it's a wired printer, it will need to be near an RJ-45 wall jack. Finally, always choose a flat, stable surface.
After you determine the location, be sure to carefully unbox the device. You may need a box cutter, scissors, or other types of tools to get the box open without destroying it. Avoid dropping or banging the printer. Some printers are very heavy, so they may be easy to drop—employ team lifting if needed. If it's a laser printer with toner cartridges installed, don't turn it on its side or upside down. Most printers will come with a quick setup guide—often a glossy poster-sized printout—that will show you how to properly unbox and connect the device. When you are done with the box, save it if you think you might need to move the printer later, or recycle it and the packing materials.
After you have unboxed the printer, with the device powered off, connect it to the host computer. Today, the vast majority of local printers are USB, but you will occasionally find ones that use different ports as well.
Once you have connected the device, connect power to it using whatever supplied power adapter comes with it. Some devices have their own built-in power supply and just need an AC power cord connecting the device to the wall outlet, while others rely on an external transformer and power supply. Finally, turn on the device.
Once you have connected and powered up the device, Windows should automatically recognize it. When it does, a screen will pop up saying that Windows is installing the driver. If a driver is not found, you will be given the option to specify the location of the driver. You can insert the driver media (flash drive, DVD, etc.) that came with the device, and the wizard will guide you through the device driver installation.
If Windows fails to recognize the device, you can start the process manually by initiating the Add Printer Wizard to troubleshoot the installation and to install the device drivers. To start the wizard in Windows 10, click Start, type printer, and then click Printers & Scanners when it appears under Best Match. Click Add A Printer Or Scanner, as shown in Figure 4.29. If a printer is not detected, a link will appear with the words "The printer that I want isn't listed." Click that link, and it will start the wizard shown in Figure 4.30.
FIGURE 4.29 Printers & scanners
Once the driver is installed, the device will function. But some devices, such as inkjet printers, must be calibrated. If the printer requires this step, it will tell you. You'll need to walk through a few steps before the printer will print, but instructions will be provided either on your computer screen or on the printer's display.
FIGURE 4.30 Add Printer Wizard
Each manufacturer's process is different, but a typical alignment/calibration works like this:
Once you have installed the software and calibrated the device, you can configure any options that you would like for the printer. All the settings and how to change them can be found online or in your user manual.
Where you configure specific printer properties depends a lot on the printer itself. As a rule of thumb, you're looking for the Printer Properties or Printing Preferences applet. In Windows 10, if you open Printers & Scanners, you will see the list of printers that are installed. Clicking an installed printer will show three buttons: Open Queue, Manage, and Remove Device, as shown in Figure 4.31. The Open Queue button lets you manage print jobs, and the Remove Device button is self-explanatory. Click Manage to get a screen like the one shown in Figure 4.32. Here you have options to print a test page, as well as links for Printer Properties and Printing Preferences. Figure 4.33 shows the General tab of the Printer Properties window, and Figure 4.34 shows the Printing Preferences window.
FIGURE 4.31 Three printer management buttons
From the printer's Properties dialog box (Figure 4.33), you can configure nearly any option that you want to for your printer. The Properties dialog box will be pretty much the same for any printer that you install, and we'll cover a few options here in a minute. First, though, notice the Preferences button on the General tab. Clicking the Preferences button is another way to get to Printing Preferences (Figure 4.34). That window will have configuration options based on your specific model of printer. Usually, though, this is where you can find orientation (portrait or landscape), duplexing, quality, color, and paper tray settings (if applicable) for the printer.
FIGURE 4.32 Manage your device options
FIGURE 4.33 Printer Properties
FIGURE 4.34 Printing Preferences
Now back to the Properties dialog box. The printer's Properties dialog box is less about how the printer does its job and more about how people can access the printer. From the Properties dialog box, you can share the printer, set up the port that it's on, configure when the printer will be available throughout the day, and specify who can use it. Let's take a look at a few key tabs. We've already taken a look at the General tab, which has the Preferences button as well as the all-important Print Test Page button. It's handy for troubleshooting!
Figure 4.35 shows the Sharing tab. If you want other users to be able to print to this printer, you need to share it. Notice the warnings above the Share This Printer check box. Those are important to remember. When you share the printer, you give it a share name. Network users can map the printer through their own Add Printer Wizard (choosing a networked printer) and by using the standard \\computer_name\share_name convention. One other important feature to call out on this tab is the Additional Drivers button. This one provides a description that is fairly self-explanatory. Permissions for user authentication are managed through the Security tab, which is shown in Figure 4.36.
FIGURE 4.35 Printer Properties Sharing tab
FIGURE 4.36 Printer Properties Security tab
FIGURE 4.37 Turning on file and printer sharing
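Once a printer is shared, clients can map it through their own Add Printer Wizard as described above, or the connection can be scripted. The following is a minimal sketch, assuming the third-party pywin32 package on a Windows client; the UNC path is hypothetical, so substitute your own \\computer_name\share_name.

```python
import win32print  # assumes the third-party pywin32 package is installed

SHARE = r"\\PRINTSRV\HPLJ-2ndFloor"   # hypothetical \\computer_name\share_name

# Create a connection to the shared printer for the current user,
# roughly what the Add Printer Wizard does when you pick a network printer.
win32print.AddPrinterConnection(SHARE)

# List the printers this client can now see (local printers plus connections).
printers = win32print.EnumPrinters(
    win32print.PRINTER_ENUM_LOCAL | win32print.PRINTER_ENUM_CONNECTIONS)
for flags, description, name, comment in printers:
    print(name)
```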
Figure 4.38 shows the Ports tab. Here you can configure your printer port and add and delete ports. There's also a check box to enable printer pooling. This would be used if you have multiple physical printers that operate under the same printer name.
FIGURE 4.38 Printer Properties Ports tab
Figure 4.39 shows the important Advanced tab of the printer's Properties dialog box. On this tab, you can configure the printer to be available during only certain hours of the day. This might be useful if you're trying to curtail after-hours printing of non–work-related documents, for example. You can also configure the spool settings. For faster printing, you should always spool the jobs instead of printing directly to the printer. However, if the printer is printing garbage, you can try printing directly to it to see if the spooler is causing the problem.
Regarding the check boxes at the bottom, you will always want to print spooled documents first because that speeds up the printing process. If you need to maintain an electronic copy of all printed files, select the Keep Printed Documents check box. Keep in mind that doing so will eat up a lot of hard disk space and could potentially create a security risk.
Finally, the Printing Defaults button takes you to the Printing Preferences window (shown earlier in Figure 4.34). The Print Processor button lets you select alternate methods of processing print jobs (not usually needed), and the Separator Page button lets you specify a file to use as a separator page (a document that prints out at the beginning of each separate print job, usually with the user's name on it), which can be useful if you have several (or several dozen) users sharing one printer.
FIGURE 4.39 Printer Properties Advanced tab
Once you have configured your printer, you are finished and can print a test page to test its output. Windows has a built-in function for doing just that—you saw links to do so in Figure 4.32 and Figure 4.33. Click the link or button, and Windows will send a test page to the printer. If the page prints, your printer is working. If not, double-check all your connections. If they appear to be in order, then skip ahead to Chapter 12 for troubleshooting tips.
Once your printer is installed and you have printed a test page, everything else should work well, right? That's usually true, but it's good practice to verify compatibility with applications before you consider the device fully installed.
With printers, this process is rather straightforward. Open the application you're wondering about and print something. For example, open Microsoft Word, type in some gibberish (or open a real document, if you want), and print it. If you are running non-Microsoft applications (such as a computer-aided drafting program or accounting software) and have questions about their compatibility with the printer, try printing from those programs as well.
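If you want to script that kind of spot check on Windows, one hedged approach is to hand a document to whatever application is registered for it and ask that application to print it. The sketch below uses only Python's standard library; the file path is a placeholder.

```python
import os

# Windows-only: ask the associated application (Word, a PDF reader, etc.)
# to print the document to the default printer.
os.startfile(r"C:\temp\compatibility-test.docx", "print")   # placeholder path
```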
Most users today know how to print, but not everyone knows how to install the right printer or how to print efficiently. This can be a significant issue in work environments.
Say your workplace has 10 different printers, and you just installed number 11. First, your company should use a naming process to identify the printers in a way that makes sense. Calling a printer HPLJ4 on a network does little to help users understand where that printer is in the building. Naming it after its physical location might make more sense.
After installing the printer, offer installation assistance to those who might want to use the device. Show users how to install the printer in Windows (or if printer installation is automated, let them know that they have a new printer and where it is). Also, let users know the various options available on that printer. Can it print double-sided? If so, you can save a lot of paper. Show users how to configure that. Is it a color printer? Do users really need color for rough drafts of documents or presentations? Show users how to print in black and white on a color printer to save the expensive color ink or toner cartridges.
On the printer we've used as an example in this chapter, most of the options involving print output are located in Preferences (refer to Figure 4.34). Two of them are on the Printing Shortcut tab: Duplex (or Print On Both Sides) and Print Quality (Best, Normal, Draft). Orientation (Portrait or Landscape) is set on the Layout tab. This printer does not have a collate feature, which is used if you are printing several copies of a longer document. Collation enables you to select whether you want it to print pages in order (1, 2, 3… 1, 2, 3… and so on) or multiple copies of the same page at once (1, 1, 1… 2, 2, 2… and so forth).
In Exercise 4.2, we'll step through the process of installing a USB printer in Windows 10.
The previous section was about installing a printer attached to your local computer. There are advantages to that approach, such as being able to manage and control your own printer, not to mention having a printer at your own desk. That doesn't happen often in the business world these days!
There are some big disadvantages as well. First, it means that all users who need to print to your device may need local accounts on your computer, unless you are on a network domain. If so, you will need to manage security for these accounts and the printer. Second, your computer is the print server. The print server is the device that hosts the printer and processes the necessary printer commands. This can slow your system down. Third, because your computer is the print server, if for any reason it's turned off, no one will be able to print to that device.
There is another option, though. Instead of needing a specific computer to be the print server, why not make the print server part of the printer itself, or make it a separate network device that hosts the printers? That is exactly the principle behind network printing. Next, we will cover two types of network printing—local network printing and cloud printing—as well as talk about data privacy concerns with printing to public or shared printers.
The key to local network printing is that you are moving the print server from your computer to another location, accessible to other users on the network. Therefore, the print server needs a direct attachment to the network, via either a wired (RJ-45) or wireless connection. You will find two major varieties of print servers. The first, called an integrated print server, is incorporated into the printer itself, and the second is a separate hardware print server. If you are using a stand-alone print server, the printers attach to the print server, either physically or logically. In most cases, if a printer is capable of connecting directly to a network, it has the capability to be its own print server.
Installing and using a networked printer is very similar to installing and using a local printer. You need to ensure that both devices are plugged in, turned on, and attached to the network (either with an RJ-45 Ethernet connection or by using wireless). Probably the biggest difference is that when you install it, you need to tell your computer that you are adding a networked printer instead of a local printer. For example, in Windows 10, when you open the Add Printer utility (shown in Figure 4.30), you choose Add A Bluetooth, Wireless Or Other Network Discoverable Printer instead of Add A Local Printer. From there, you will be asked to install the printer driver, just as if the printer were directly attached to your computer. Once it's installed, you use it just as you would use a local printer, including setting the configuration options that we looked at in earlier sections. Every computer on the local network should be able to see and add the printer in the same way.
There are a few other ways that you can add shared networked printers: by using TCP, Bonjour, or AirPrint.
Printers that are network-aware need IP addresses, so it makes sense that you can add a networked printer by using TCP/IP, also known as TCP printing. Exercise 4.3 walks you through the general process of installing a TCP printer, using Windows 10 as an example.
Some installations will ask you which TCP printing protocol you want to use, RAW or LPR. RAW (also called the Standard TCP/IP Port Monitor) is the default, and it uses TCP port 9100. It also uses the Simple Network Management Protocol (SNMP) for bidirectional communication between the computer and the printer. LPR is older, and the protocol is included for use with legacy systems. It's limited to source ports 721–731 and the destination port 515.
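To see what RAW printing looks like at the network level, here is a minimal sketch that opens a TCP connection to port 9100 and sends the printer a few bytes of plain text. The IP address is a placeholder, and a real job would normally be PostScript or PCL that the printer's page-description language understands.

```python
import socket

PRINTER_IP = "192.168.1.50"   # placeholder address of a network printer
RAW_PORT = 9100               # the Standard TCP/IP Port Monitor (RAW) port

with socket.create_connection((PRINTER_IP, RAW_PORT), timeout=10) as conn:
    conn.sendall(b"RAW port 9100 test page\f")   # \f (form feed) ejects the page
```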
After the printer is installed, it will appear in your Printers & Scanners window, just as any other printer would.
There are a few advantages to using TCP printing. First, it sends the print jobs directly to the printer, so your system does not need to act as the print server or spend processing time dealing with formatting the print job. Second, it allows clients with different OSs, such as Linux or macOS, to add printers without worrying about intra-OS conflicts.
Apple introduced Bonjour in 2002 (then under the name Rendezvous) as an implementation of zero configuration networking. It's designed to enable automatic discovery of devices and services on local networks using TCP/IP as well as to provide hostname resolution. Currently, it comes installed by default on Apple's macOS and iOS operating systems. Bonjour makes it easy to discover and install printers that have been shared by other Bonjour-enabled clients on the network.
Even though Apple developed Bonjour, it does work on other operating systems. For example, it comes with iTunes and the Safari browser, so if you have either of those installed on a Windows computer, odds are that you have Bonjour as well. Once installed, the Bonjour service starts automatically and scans the network looking for shared devices. Exercise 4.4 shows you how to see if Bonjour is installed in Windows.
Bonjour works only on a single broadcast domain, meaning that it will not find a printer or other device if it's on the other side of a router from your computer. All major printer manufacturers support Bonjour technology.
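Because Bonjour is built on mDNS/DNS-SD, the same advertisements can be browsed from code on any platform. The following is a hedged sketch that assumes the third-party python-zeroconf package and simply lists IPP printers advertised on the local broadcast domain.

```python
import time
from zeroconf import Zeroconf, ServiceBrowser  # third-party python-zeroconf package

class PrinterListener:
    def add_service(self, zc, service_type, name):
        info = zc.get_service_info(service_type, name)
        if info:
            print(f"Found {name} at {info.parsed_addresses()} port {info.port}")

    def remove_service(self, zc, service_type, name):
        pass

    def update_service(self, zc, service_type, name):
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_ipp._tcp.local.", PrinterListener())  # IPP printers over mDNS
time.sleep(5)   # give devices a few seconds to respond
zc.close()
```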
If you are using a Mac, adding a Bonjour printer is easy. You open System Preferences ➢ Print And Scan, click the plus sign under Printers to open the Add Printer window, and look for the printer on the list. If the Mac doesn't have the driver available, you will be asked to provide it. Otherwise, you're done.
In order to add or share a Bonjour printer from Windows, you need to download Bonjour Print Services for Windows. It's found on Apple's support site at https://support.apple.com/kb/dl999.
The one big complaint that Apple aficionados had about Bonjour was that it didn't support printing from iPhones or iPads. In 2010, Apple introduced AirPrint to meet that need.
The idea behind AirPrint is quite simple. Mobile devices can automatically detect AirPrint-enabled printers on their local network and print to them without requiring the installation of a driver. To be fair, what Apple really did was eliminate the need for a specific printer driver to be installed on the client and replace it with the AirPrint concept. Then it was up to the printer manufacturers to develop their own drivers that talked to AirPrint. HP was happy to oblige with its Photosmart Plus series, and other manufacturers soon followed. The list of AirPrint-enabled printers is available at https://support.apple.com/en-us/HT201311.
From the end-user standpoint, though, no driver is required.
There really is no installation process, and printing is easy. Just be sure that your mobile device is on the same local network as an AirPrint printer. When you attempt to print from your device, select the printer to which you want to print, and it should work.
When printing to a public printer, or one that is shared in a common workspace, there may be data privacy concerns. For example, employees in Human Resources (HR) might need to print confidential personnel files, or someone in the Mergers group might have a highly restricted contract to print. Let's take a look at some security options for networked printers.
Requiring users to authenticate (log in) to the printer is one step that can improve printer security. Not all printers have this capability, but most newer laser printers and MFDs designed for office use will. Figure 4.42 shows the front of a Xerox AltaLink MFD. In the center of the picture is a touchscreen display where a user can copy, scan to email, look at printer jobs, configure the device, and use a secure print feature called SafeQ. If a user taps the LogIn button in the upper-left corner, they will be presented with a keyboard so that they can enter their username and password. While functional, this is a bit old-school. An easier way for many users is to scan their work badge (this is called badging) on the badge reader at the left of the unit. Doing so will automatically log them in and provide access to secure printing features.
FIGURE 4.42 Xerox AltaLink badge scanner and touchscreen
Printing a document is usually pretty straightforward. On the computer, the user hits some sort of Print button, chooses the printer to send it to, and presses Print again (or OK, or something similar). Then they get up and walk to the printer to retrieve the hard copy. Most office denizens have been well trained in this process.
Some print jobs might contain sensitive information, though, so the user wants to ensure that they are physically at the printer before it starts printing. Or, perhaps the user has printed to a device in a different building and wants the printer to wait to start printing until they can get there. In cases like these, a secured prints feature can be used to hold the print job until the user is ready for it.
Looking back at Figure 4.42, you can see a feature on the bottom center of the touchscreen called SAFEQ Print. YSoft SAFEQ is an industry-standard enterprise print management suite adopted by many organizations, and it works seamlessly with many printers. The administrator sets it up on the printer (and we mean logical printer here, not necessarily the physical device), and when the user prints to that printer, the job waits for them until they authenticate on the physical printer and tell it to start. Figure 4.43 shows the SAFEQ user authentication screen. Once logged in, the user will be presented with the job(s) to print, as shown in Figure 4.44.
FIGURE 4.43 SAFEQ authentication screen
FIGURE 4.44 Secured print job
Being able to see who used (or perhaps abused) a printer after the fact can come in handy. Some printers have the ability to save a list of documents that have been printed as an audit log. The Xerox printer we've used as an example in this section does just that, and the log is shown in Figure 4.45.
Other printers will integrate logging software into the operating system's standard logging utilities. For example, some HP printers will install an audit log into Windows Event Viewer. (We will cover Event Viewer in Chapter 14, “Windows 10 Configuration.”) Third-party audit software is also available for use.
FIGURE 4.45 Xerox print log
Scanning is really the opposite of printing. Instead of printing to get electronic information onto paper, scanning takes printed information and stores it electronically. In the office environment, the two most popular types of scanners are flatbed scanners and automatic document feeder (ADF) scanners.
To use a flatbed scanner, simply open the lid and lay the document to be scanned on the scanner glass, aligning it to the proper corner as shown in Figure 4.46. There will be an icon on the scanning bed that shows where the document should go. Close the lid and press Scan on the touchscreen, and scanning will begin.
If you have more than one document to scan, using a flatbed scanner can be a real pain. Fortunately, some scanners have an attachment called an automatic document feeder (ADF) that lets you scan multiple pieces of paper in one job. Figure 4.47 shows an example with paper loaded in it. It's common to have ADFs that allow for up to 50 pages to be scanned at once.
FIGURE 4.46 A flatbed scanner
FIGURE 4.47 Automatic document feeder (ADF) on an MFD
Whenever you scan a document, you need to figure out where to send it. MFDs don't usually have the memory to save images of scans—besides, the point of scanning a document is usually to email it to someone or save it on a hard drive for later retrieval. Let's take a look at three different ways to send or save scanned materials. For all three of these options, it's assumed that the scanner is connected to the network.
FIGURE 4.48 Scan to email
Scan to Folder An alternative option to emailing a scanned file is to save it in a network folder. This is a particularly viable solution if the scanned file is too large to be emailed.
The protocol the printer uses to transport the file from itself to the network folder is called Server Message Block (SMB). In addition, the administrator must set up the MFD to support SMB scanning, and the recipient folder needs to be properly shared and secured too. Performing the scan from the MFD is done via a screen similar to the one shown in Figure 4.48, except instead of selecting an email recipient, you would navigate to the folder where you want to save the file.
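For a sense of what the MFD is doing when it scans to a folder, here is a hedged sketch of an SMB client writing a file into that kind of share. It assumes the third-party smbprotocol package; the server name, share, credentials, and filename are all placeholders.

```python
import smbclient  # high-level client from the third-party smbprotocol package

# Authenticate to the file server hosting the scan share (placeholder credentials).
smbclient.register_session("fileserver01", username="scanuser", password="P@ssw0rd!")

# Write a file into the share, just as the MFD does when it finishes a scan job.
with smbclient.open_file(r"\\fileserver01\scans\test-scan.pdf", mode="wb") as f:
    f.write(b"%PDF-1.4\n% placeholder scan contents\n")

# Confirm the file landed in the shared folder.
print(smbclient.listdir(r"\\fileserver01\scans"))
```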
Considering the amount of work they do, printers last a pretty long time. Some printers handle over 100,000 pages per month, yet they're usually pretty reliable devices. You can help your printers live long and fulfilling lives by performing the right maintenance, and smoothly running printers always make your officemates happy. After all, going to get your print job from the printer and discovering that the printer is in the shop is a very frustrating experience!
Regardless of the type of printer you use, giving it a regular checkup is a good idea. You're probably familiar with some of the activities that fall under maintenance, such as replacing paper, ink, or toner cartridges. We'll look at those as well as some additional, more involved maintenance procedures.
To maintain a printer properly, you need to replace consumables such as toner or ink cartridges, assemblies, filters, and rollers on occasion. Trying to cut costs by buying cheaper supplies rarely pays off.
Whenever purchasing supplies for your printer, always get supplies from the manufacturer or from an authorized reseller. This way, you'll be sure that the parts are of high quality. Using unauthorized parts can damage your printer and possibly void your warranty.
Most people don't give much thought to the kind of paper they use in their printers. It's a factor that can have a tremendous effect on the quality of the hard-copy printout, however, and the topic is more complex than people think. For example, the wrong paper can cause frequent paper jams and possibly even damage components.
Several aspects of paper can be measured; each gives an indication as to the paper's quality. The first factor is composition. Paper is made from a variety of substances. Paper used to be made from cotton and was called rag stock. It can also be made from wood pulp, which is cheaper. Most paper today is made from the latter or a combination of the two.
Another aspect of paper is the property known as basis weight (or simply weight). The weight of a particular type of paper is the actual weight, in pounds, of a ream (500 sheets) of the standard size of that paper made of that material. For regular bond paper, that standard size is 17 × 22 inches; 20-pound bond, for example, means that a 500-sheet ream of 17 × 22 inch sheets weighs 20 pounds.
The final paper property we'll discuss is the caliper (or thickness) of an individual sheet of paper. If the paper is too thick, it may jam in feed mechanisms that have several curves in the paper path. On the other hand, a paper that's too thin may not feed at all.
These are just three of the categories we use to judge the quality of paper. Because there are so many different types and brands of printers as well as paper, it would be impossible to give the specifications for the “perfect” paper. However, the documentation for any printer will give specifications for the paper that should be used in that printer.
Many impact printers need to use special paper that has tractor feed perforations on the side, or they will not work properly. When replacing tractor feed paper, it's very easy to get it misaligned, and it will feed crookedly and ultimately jam the printer. Similarly, thermal printers also require special paper that needs to be loaded properly. In many cases, if you load it upside down, the unit will not produce images. By comparison, adding paper to a laser or inkjet printer is usually very easy.
The area in which using recommended supplies is the biggest concern is ink and toner cartridges. Using the wrong ink or toner supplies is the easiest way to ruin a perfectly good printer.
Dot-matrix printers use a cloth or polyester ribbon soaked in ink and coiled up inside a plastic case. This assembly is called a printer ribbon (or ribbon cartridge). Once the ribbon has run out of ink, it must be discarded and replaced. Ribbon cartridges are developed closely with their respective printers. For this reason, ribbons should be purchased from the same manufacturer as the printer. The wrong ribbon could jam in the printer as well as cause quality problems.
Inkjet cartridges have a liquid ink reservoir. The ink in these cartridges is sealed inside. Once the ink runs out, the cartridge must be removed and discarded. A new, full cartridge is installed in its place. Because the ink cartridge contains the printing mechanism as well as ink, it's like getting a new printer every time you replace the ink cartridge.
In some inkjet printers, the ink cartridge and the print head are in separate assemblies. This way, the ink can be replaced when it runs out, and the print head can be used several times. This approach works fine if the printer is designed to work this way. However, some people think that they can do this on their integrated cartridge/print head system, using special ink cartridge refill kits. These kits consist of a syringe filled with ink and a long needle. The needle is used to puncture the top of an empty ink cartridge, and the syringe is then used to refill the reservoir. Refilling an integrated cartridge this way is generally a bad idea: the refilled cartridge is prone to clogging or leaking, and using one can void the printer's warranty.
The final type of consumable is toner. Each model of laser printer uses a specific toner cartridge. You should check the printer's manual to see which toner cartridge your printer needs. Many businesses will recycle your toner or ink cartridges for you, refill them, and sell them back to you at a discount. Don't buy them. While some businesses that perform this “service” are more legitimate than others, using recycled parts is more dangerous to your hardware than using new parts. The reason for this is that refilled cartridges are more likely to break or leak than new parts, and this leakage could cause extensive damage to the inside of your printer. And again, using secondhand parts can void your warranty, so you're left with a broken printer that you have to pay for. Avoid problems like this by buying new parts.
When shopping for a printer, one of the characteristics you should look for is the printer's capacity, which is often quoted in monthly volume. This is particularly important if the printer will be serving in a high-load capacity. Every printer needs periodic maintenance, but printers that can handle a lot of traffic typically need it less frequently. Check the printer specifications to see how often scheduled maintenance is suggested. Never, ever fall behind on performing scheduled maintenance on a printer.
Many laser printers have LCD displays that provide useful information, such as error messages or notices that you need to replace a toner cartridge. The LCD display will also tell you when the printer needs scheduled maintenance. How does it know? Printers keep track of the number of pages they print, and when the page limit is reached, they display a message, usually something simple like Perform user maintenance. The printer will still print, but you should perform the maintenance.
Being the astute technician that you are, you clean the printer with the recommended cleaning kit or install the maintenance kit that you purchased from the manufacturer. Now, how do you get the maintenance message to go away? Reset the page count using a menu option. For example, on many HP laser printers, you press the Menu button until you get to the Configuration menu. Once there, you press the Item key until the display shows Service Message = ON. Then press the plus key (+) to change the message to Service Message = OFF. Bring the printer back online, and you're ready to go.
Performing routine maintenance will keep the printer clean, make it last longer, and help prevent annoying paper jams.
With all of the ink or toner they use, printers get dirty. If printers get too dirty or if the print heads get dirty, you'll notice print problems. No one wants this to happen.
Most printers have a self-cleaning utility that is activated through a menu option or by pressing a combination of buttons on the printer itself. It's recommended that you run the cleaning sequence every time you replace the toner or ink cartridges. If you experience print-quality problems, such as lines in the output, run the cleaning routine.
Sometimes, the self-cleaning routines aren't enough to clear up the problem. If you are having print-quality issues, you might want to consider purchasing a cleaning or maintenance kit, which frequently comes with a cleaning solution.
Each cleaning kit comes with its own instructions for use. Exercise 4.6 walks you through the steps of using an inkjet cleaning solution. Note that the steps for your printer might differ slightly; please consult your manual for specific instructions. After using a cleaning kit on a laser or inkjet printer, it's best to perform a calibration per the printer's instructions.
Thermal printers require special attention because they contain a heating element. Always unplug the device and ensure that it's cooled off before trying to clean it. Thermal printer cleaning cards, cleaning pens, and kits are widely available in the marketplace. If you need to remove any debris (from any printer), use compressed air or a specialized computer vacuum.
Printers won't complain if the weather outside is too hot or too cold, but they are susceptible to environmental issues. Here are some things to watch out for in your printer's environment:
Ozone Laser printers that use corona wires produce ozone as a by-product of the printing process. In offices, ozone can cause respiratory problems in small concentrations, and it can be seriously dangerous to people in large amounts. Ozone is also a very effective oxidizer and can cause damage to printer components.
Fortunately, laser printers don't produce large amounts of ozone, and most laser printers have an ozone filter. Ozone is another reason to ensure that your printer area has good ventilation. Also, replace the ozone filter periodically; check your printer's manual for recommendations on when to do this.
The printer market encompasses a dizzying array of products. You can find portable printers, photo printers, cheap black-and-white printers for under $30, high-end color laser printers for over $5,000, and everything in between. Most of the cheaper printers do not have upgrade options, but higher-end printers will have upgrade options, including memory, network cards, and firmware. Let's examine some ways that you can upgrade a slower printer or add functionality without breaking the bank.
When purchasing a memory upgrade for your printer, you need to make sure of two things. First, buy only memory that is compatible with your printer model. Most printers today use standard computer dual in-line memory modules (DIMMs), but check your manual or the manufacturer's website to be sure. If you're not sure, purchasing the memory through the manufacturer's website (or an authorized reseller) is a good way to go. Second, be sure that your printer is capable of a memory upgrade. It's possible that the amount of memory in your printer is at the maximum that it can handle.
Once you have obtained the memory, it's time to perform the upgrade. The specific steps required to install the memory will depend on your printer. Check the manual or the manufacturer's website for instructions tailored to your model.
Exercise 4.7 walks you through the general steps for installing memory into a laser printer.
FIGURE 4.49 Printer installable options
Many printers today have network capabilities, but not all do. Installing a NIC directly into a printer is an option on some devices. The NIC in a printer is similar to the NIC in a computer, with a couple of important differences. First, the NIC in a printer has a small processor on it to perform the management of the NIC interface (functions that the software on a host computer would do). This software is usually referred to as a print server, but be careful because that term can also refer to a physical computer that hosts many printers. Second, the NIC in a printer is proprietary, for the most part—that is, the same manufacturer makes the printer and the NIC.
When a person on the network prints to a printer with a NIC, they are printing right to the printer and not going through any third-party device (although in some situations, that is desirable and possible with NICs). Because of its dedicated nature, the NIC option installed in a printer makes printing to that printer faster and more efficient—that NIC is dedicated to receiving print jobs and sending printer status to clients.
Your manual is the best place to check to see if you can install a print server—internal ones look like regular expansion cards. Specific steps for installing the print server will also be in the manual or on the manufacturer's website. Generally speaking, it's very similar to installing a NIC into a computer. Figure 4.50 shows an internal HP print server.
FIGURE 4.50 HP print server expansion card
As with upgrading memory, methods to upgrade a printer's firmware depend on the model of printer. Most of the time, upgrading a printer's firmware is a matter of downloading and/or installing a free file from the manufacturer's website. Printer firmware upgrades are generally done from the machine hosting the printer (again, usually called the print server).
Firmware is usually upgraded for one of two reasons. One, if you are having compatibility issues, a firmware upgrade might solve them. Two, firmware upgrades can offer newer features that are not available on previous versions.
While we've covered some of the most important upgrades, most printers (especially laser printers) can be upgraded with additional capabilities as well. Each manufacturer, with the documentation for each printer, includes a list of all of the accessories, options, and upgrades available. The following options can be included on that list:
For a printer to print properly, the type style, or font, being printed must be downloaded to the printer along with the job being printed. Desktop publishing and graphic design businesses that print color pages on slower color printers are always looking for ways to speed up their print jobs, so they install multiple fonts into the onboard memory of the printer to make them printer-resident fonts. There's a problem, however: most printers have a limited amount of storage space for these fonts. To solve this problem, printer manufacturers made it possible for hard drives to be added to many printers. The hard drives can be used to store many fonts used during the printing process and are also used to store a large document file while it is being processed for printing.
One option that is popular in office environments is the addition of paper trays. Most laser and inkjet printers come with at least one paper tray (usually 500 sheets or fewer). The addition of a paper tray allows a printer to print more sheets between paper refills, thus reducing its operating cost. Also, some printers can accommodate multiple paper trays, which can be loaded with different types of paper, stationery, and envelopes. The benefit is that you can print a letter and an envelope from the same printer without having to leave your desk or change the paper in the printer.
Related to trays is the option of feeders. Some types of paper products need to be watched as they are printed to make sure that the printing happens properly. One example is envelopes: you usually can't put a stack of envelopes in a printer because they won't line up straight or they may get jammed. An accessory that you might add for this purpose is the envelope feeder. An envelope feeder typically attaches to the front of a laser printer and feeds in envelopes, one at a time. It can usually hold between 100 and 200 envelopes.
A printer's finisher does just what its name implies: it finishes the document being printed. It does this by folding, stapling, hole punching, sorting, or collating the sets of documents being printed into their final form. So rather than printing out a bunch of paper sheets and then having to collate and staple them, you can have the finisher do it. This particular option, while not cheap, is becoming more popular on laser printers to turn them into multifunction copiers. As a matter of fact, many copiers are now digital and can do all the same things that a laser printer can do, but much faster and at a much lower cost per page.
In this chapter, we discussed how different types of printers work as well as the most common methods of connecting them to computers. You learned how computers use page-description languages to format data before they send it to printers and drivers to talk to them. You also learned about the various types of consumable supplies and how they relate to each type of printer.
The most basic category of printer currently in use is the impact printer. Impact printers form images by striking something against a ribbon, which in turn makes a mark on the paper. You learned how these printers work and the service concepts associated with them.
One of the most popular types of printer today is the inkjet printer, so named because of the mechanism used to put ink on the paper.
The most complex type of printer is the laser printer. The A+ 220-1101 exam covers this type of printer more than any other. You learned about the steps in the electrophotographic (EP) imaging process, the process that explains how laser printers print. We also explained the various components that make up this printer and how they work together.
3D printers are relatively new to the market. They're not printers in the sense that they put ink to paper. They're actually fabricators, which make 3D objects out of filament or resin.
You then learned about the interfaces used to connect printers to PCs and how to install and share a printer. Proper steps include connecting the device, installing the driver, configuring options, validating application and operating system compatibility, and educating users on how to use the device. Installing the device is the first step, but you're not done until you ensure that it works properly and that users know how to access it.
Installing network printers usually involves a few more steps than are needed to install local printers, and the device is connected to the network instead of to a host. Networked printers are often used for scan services, such as scanning to email, SMB, and the cloud. Security becomes critical here as well, so you should be familiar with user authentication, badging, secured prints, and audit logs.
Finally, we looked at how to perform printer maintenance, including the importance of using recommended supplies and various types of upgrades you can install in printers.
Know the differences between types of printer technologies (for example, laser, inkjet, thermal, impact). Laser printers use a laser and toner to create the page. Inkjet printers spray ink onto the page. Thermal printers use heat to form the characters on the page. Impact printers use a mechanical device to strike a ribbon, thus forming an image on the page.
Know the three most common ways to connect a printer. The methods are USB, Ethernet, and wireless.
Be familiar with printer configuration settings. Know how duplex, orientation, tray settings, and print quality are configured.
For networked printers, understand security and scan services options. Security can include user authentication and badging, audit logs, and secured prints. Network scan services include scan to email, scan to a folder (using the SMB protocol), and scan to cloud.
Understand the basics of how 3D printers create objects. 3D printers use filament or resin. The material is most often a plastic composite, but filament can also be made of other materials, such as aluminum or copper. 3D printers create objects by stacking thin layers of material on top of one another.
Know how to install and configure printers. The basic procedure is as follows:
Know the seven steps in the laser imaging process. The seven steps are processing, charging, exposing, developing, transferring, fusing, and cleaning.
Know the key parts in a laser printer and appropriate maintenance procedures. Key parts are the imaging drum, fuser assembly, transfer belt, transfer roller, pickup rollers, separation pads, and duplexing assembly. Maintenance includes replacing toner, applying a maintenance kit, calibrating, and cleaning.
Know the key parts in an inkjet printer and appropriate maintenance procedures. Inkjet parts include the ink cartridge, print head, roller, feeder, duplexing assembly, and carriage belt. Maintenance items include cleaning heads, replacing cartridges, calibrating, and clearing paper jams.
Know the key components in a thermal printer and appropriate maintenance procedures. The feed assembly and heating element are important thermal printer parts. The paper is also important here because it's special heat-sensitive paper. Maintenance includes replacing paper, cleaning the heating element, and removing debris.
Know the key parts in an impact printer and appropriate maintenance procedures. Impact printer parts to know include the print head, ribbon, tractor feed (and special tractor feed paper), and impact paper. Maintenance includes replacing the ribbon, print head, and paper.
Understand the importance of using recommended supplies. Using consumables (paper, ink, toner) that are recommended for your printer is important. Using bad supplies could ruin your printer and void your warranty.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance-based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answer compares to the authors', refer to Appendix B.
Your network has several inkjet printers in use. A user is complaining that their documents are consistently printing with extra smudges along the lines of print on one of them. What steps would you take to clean the printer?
THE FOLLOWING COMPTIA A+ 220-1101 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
Looking around most homes or offices today, it's hard to imagine a world without networks. Nearly every place of business has some sort of network. Wireless home networks have exploded in popularity in the last decade, and it seems that everywhere you go, you can see a dozen wireless networks from your smartphone, tablet, or laptop.
It didn't use to be that way. Even when we're not actively thinking about networks, we're still likely connected to one via the ubiquitous Internet-enabled smartphones in our pockets and purses. We take for granted a lot of what we have gained in technology over the past few years, much less the past several decades.
Thirty years ago, if you wanted to send a memo to everyone in your company, you had to use a photocopier and interoffice mail. Delivery to a remote office could take days. Today, one mistaken click of the Reply All button can result in instantaneous embarrassment. Email is an example of one form of communication that became available with the introduction and growth of networks.
This chapter focuses on the basic concepts of how a network works, including the way it sends information, the hardware used, and the common types of networks you might encounter. It used to be that in order to be a PC technician, you needed to focus on only one individual (but large) computer at a time. In today's environment, though, you will in all likelihood need to understand combinations of hardware, software, and network infrastructure in order to be successful.
Stand-alone personal computers, first introduced in the late 1970s, gave users the ability to create documents, spreadsheets, and other types of data and save them for future use. For the small-business user or home-computer enthusiast, this was great. For larger companies, however, it was not enough. Larger companies had greater needs to share information between offices and sometimes over great distances. Stand-alone computers were insufficient for the following reasons:
To address these problems, networks were born. A network links two or more computers together to communicate and share resources. Their success was a revelation to the computer industry as well as to businesses. Now departments could be linked internally to offer better performance and increase efficiency.
You have probably heard the term networking in a social or business context, where people come together and exchange names for future contact and access to more resources. The same is true with a computer network. A computer network enables computers to link to each other's resources. For example, in a network, every computer does not need a printer connected locally in order to print. Instead, you can connect a printer to one computer, or you can connect it directly to the network and allow all the other computers to access it. Because they allow users to share resources, networks can increase productivity as well as decrease cash outlay for new hardware and software.
In many cases, networking today has become a relatively simple plug-and-play process. Wireless network cards can automatically detect and join networks, and then you're seconds away from surfing the web or sending email. Of course, not all networks are that simple. Getting your network running may require a lot of configuration, and one messed-up setting can cause the whole thing to fail.
To best configure your network, there is a lot of information you should understand about how networks work. The following sections cover the fundamentals, and armed with this information, you can then move on to how to make it work right.
One of the ways to think about how networks are structured is to categorize them by network type. Some networks are small in scale, whereas others span the globe. Some are designed to be wireless only, whereas others are designed specifically for storage. Understanding the basic structure of the network can often help you solve a problem. There are six different types of networks you need to be familiar with, and we'll cover them here: LANs, WANs, PANs, MANs, SANs, and WLANs.
The local area network (LAN) was created to connect computers in a single office or building. Expanding on that, a wide area network (WAN) includes networks outside the local environment and can also distribute resources across great distances. Generally, it's safe to think of a WAN as multiple dispersed LANs connected together. Today, LANs exist in many homes (wireless networks) and nearly all businesses. WANs are fairly common too, as businesses embrace mobility and more of them span greater distances. Historically, only larger corporations used WANs, but many smaller companies with remote locations now use them as well.
Having two types of network categories just didn't encompass everything that was out there, so the industry introduced several more terms: the personal area network, the metropolitan area network, the storage area network, and the wireless local area network. The personal area network (PAN) is a very small-scale network designed around one person within a very limited boundary area. The term generally refers to networks that use Bluetooth technology. On a larger scale is the metropolitan area network (MAN), which is bigger than a LAN but not quite as big as a WAN. A storage area network (SAN) is designed for optimized large-scale, long-term data storage. And the wireless local area network (WLAN) is like a LAN, only wireless. We'll cover all of these concepts in more detail in the following sections.
The 1970s brought us the minicomputer, which was a smaller version of large mainframe computers. Whereas the mainframe used centralized processing (all programs ran on the same computer), the minicomputer used distributed processing to access programs across other computers. As depicted in Figure 5.1, distributed processing allows a user at one computer to use a program on another computer as a backend to process and store information. The user's computer is the frontend, where data entry and minor processing functions are performed. This arrangement allowed programs to be distributed across computers rather than be centralized. This was also the first time network cables rather than phone lines were used to connect computers.
FIGURE 5.1 Distributed processing
By the 1980s, offices were beginning to buy PCs in large numbers. Portables were also introduced, allowing computing to become mobile. Neither PCs nor portables, however, were efficient in sharing information. As timeliness and security became more important, floppy disks were just not cutting it. Offices needed to find a way to implement a better means to share and access resources. This led to the introduction of the first type of PC local area network (LAN): ShareNet by Novell, which had both hardware and software components. LANs simply link computers in order to share resources within a closed environment. The first simple LANs were constructed a lot like the LAN shown in Figure 5.2.
FIGURE 5.2 A simple LAN
After the introduction of ShareNet, more LANs sprouted. The earliest LANs could not cover large distances. Most of them could only stretch across a single floor of the office and could support no more than 30 computers. Furthermore, they were still very rudimentary, and only a few software programs supported them. The first software programs that ran on a LAN were not capable of being used by more than one user at a time; the file in use was locked so that no one else could open it. (This mechanism is known as file locking.) Nowadays, multiple users often concurrently access a program or file. Most of the time, the only limitations will be restrictions at the record level if two users are trying to modify a database record at the same time.
By the late 1980s, networks were expanding to cover large geographical areas and were supporting thousands of users. The concept of a wide area network (WAN) was born. WANs were first implemented with mainframes at massive government expense, but started attracting PC users as networks went to this new level. Employees of businesses with offices across the country communicated as though they were only desks apart. Soon the whole world saw a change in the way of doing business, across not only a few miles but across countries. Whereas LANs are limited to single buildings, WANs can span buildings, states, countries, and even continental boundaries. Figure 5.3 shows an example of a simple WAN.
Generally speaking, it's safe to think of a WAN as multiple dispersed LANs connected together. Historically, only larger corporations used WANs, but many smaller companies with remote locations now use them as well. The networks of today and tomorrow are no longer limited by the inability of LANs to cover distance and handle mobility. WANs play an important role in the future development of corporate networks worldwide.
FIGURE 5.3 A simple WAN
In moving from LANs to WANs, we increased the scope. Going the other way, a personal area network (PAN) is going to be much smaller in scale. The term PAN is most commonly used in reference to Bluetooth networks. In 1998, a consortium of companies formed the Bluetooth Special Interest Group (SIG) and formally adopted the name Bluetooth for its technology. The name comes from a tenth-century Danish king named Harald Blåtand, known as Harold Bluetooth in English. (One can only imagine how he got that name.) King Blåtand had successfully unified warring factions in the areas of Norway, Sweden, and Denmark. The makers of Bluetooth were trying to unite disparate technology industries, namely computing, mobile communications, and the auto industry.
Current membership in the Bluetooth SIG includes Microsoft, Intel, Apple, IBM, Toshiba, and several cell phone manufacturers. The technical specification IEEE 802.15.1 describes a wireless personal area network (WPAN) based on Bluetooth version 1.1.
The first Bluetooth device on the market was an Ericsson headset and cell phone adapter, which arrived on the scene in 2000. While mobile phones and accessories are still the most common type of Bluetooth device, you will find many more, including wireless keyboards, mice, and printers. Figure 5.4 shows a Bluetooth USB adapter.
FIGURE 5.4 Bluetooth USB adapter
One of the defining features of a Bluetooth WPAN is its temporary nature. With traditional Wi-Fi, you need a central communication point, such as a wireless router or access point, to connect more than two devices together. (This is referred to as infrastructure.) Bluetooth networks are formed on an ad hoc basis, meaning that whenever two Bluetooth devices get close enough to each other, they can communicate directly with each other—no central communication point is required. This dynamically created network is called a piconet. A Bluetooth-enabled device can communicate with up to seven other devices in one piconet. Two or more piconets can be linked together in a scatternet. In a scatternet, one or more devices would serve as a bridge between the piconets.
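To make the piconet limits concrete, here's a minimal sketch in Python. The Piconet class and its method names are made up for illustration (this is not a real Bluetooth API); it simply models the rule that one device can talk to up to seven others, and shows how a shared device can bridge two piconets into a scatternet.

```python
# Hypothetical model of Bluetooth piconet membership limits (illustration only).

class Piconet:
    MAX_ACTIVE_PEERS = 7  # one device can communicate with up to seven others in a piconet

    def __init__(self, first_device):
        self.devices = [first_device]

    def join(self, device):
        """Add a device if the piconet still has room; return True on success."""
        if len(self.devices) - 1 >= self.MAX_ACTIVE_PEERS:
            return False  # piconet is full; the device must form or join another piconet
        self.devices.append(device)
        return True


# Two piconets that share a device form a simple scatternet; the shared
# device acts as the bridge between them.
office = Piconet("laptop")
car = Piconet("phone")
office.join("headset")
car.join("headset")
print("Bridge device:", set(office.devices) & set(car.devices))
```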
For those networks that are larger than a LAN but confined to a relatively small geographical area, there is the term metropolitan area network (MAN). A MAN is generally defined as a network that spans a city or a large campus. For example, if a city decides to install wireless hotspots in various places, that network could be considered a MAN.
One of the questions a lot of people ask is, “Is there really a difference between a MAN and a WAN?” There is definitely some gray area here; in many cases they are virtually identical. Perhaps the biggest difference is who has responsibility for managing the connectivity. In a MAN, a central IT organization, such as the campus or city IT staff, is responsible. In a WAN, it's implied that you will be using publicly available communication lines, and there will be a phone company or other service provider involved.
A storage area network (SAN) is designed to do exactly what it says, which is to store information. Although a SAN can be implemented a few different ways, imagine a network (or network segment) that holds nothing but networked storage devices, whether they be network-attached storage (NAS) hard drives or servers with lots of disk space dedicated solely to storage. This network won't have client computers or other types of servers on it. It's for storage only. Figure 5.5 shows what a SAN could look like.
FIGURE 5.5 Storage area network (SAN)
Perhaps you're thinking, why would someone create a network solely for storage? It's a great question, and there are several benefits to having a SAN.
Block-level storage is more efficient. This is getting into the weeds a bit, but most SANs are configured to store and retrieve data in a system called block storage. This contrasts with the file-based access systems you're probably used to, such as the ones in Windows and macOS. For anyone who has used a Windows-based or Mac computer, file storage is instantly recognizable. It's based on the concept of a filing cabinet. Inside the filing cabinet are folders, and files are stored within the folders. Each file has a unique name when you include the folders and subfolders it's stored in. For example, c:\files\doc1.txt is different from c:\papers\doc1.txt. The hierarchical folder structure and the naming scheme of file storage make it relatively easy for humans to navigate. Larger data sets and multiple embedded folders can make it trickier—who here hasn't spent 10 minutes trying to figure out which folder they put that file in?—but it's still pretty straightforward.
With file storage, each file is treated as its own singular entity, regardless of how small or large it is. With block storage, files are split into chunks of data of equal size, assigned a unique identifier, and then stored on the hard drive. Because each piece of data has a unique address, a file structure is not needed. Figure 5.6 illustrates what this looks like.
FIGURE 5.6 Block storage
Block storage allows a file to be broken into more manageable chunks rather than being stored as one entity. This allows the operating system to modify one portion of a file without needing to open the entire file. In addition, since data reads and writes are always of the same block size, data transfers are more efficient and therefore faster. Latency with block storage is lower than with other types of storage.
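As a rough illustration of the concept (not how any particular SAN product is implemented), the following Python sketch splits data into equal-sized blocks, gives each block a unique identifier, and treats the "file" as nothing more than an ordered list of block IDs. The 4 KB block size and the use of a hash as the identifier are assumptions made for the example.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size for illustration; real systems vary

def split_into_blocks(data: bytes):
    """Split raw data into equal-sized blocks, each addressed by a unique ID."""
    blocks = {}
    order = []
    for offset in range(0, len(data), BLOCK_SIZE):
        # Pad the final chunk so every block is exactly the same size.
        chunk = data[offset:offset + BLOCK_SIZE].ljust(BLOCK_SIZE, b"\x00")
        block_id = hashlib.sha256(chunk).hexdigest()  # one simple way to assign an address
        blocks[block_id] = chunk
        order.append(block_id)
    return order, blocks  # the "file" is just the ordered list of block IDs

order, store = split_into_blocks(b"some example data" * 1000)
print(len(order), "blocks written")
```

Because each block is addressed individually, changing one portion of the data means rewriting only the affected block rather than the whole file, which is the efficiency the text describes.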
One of the first common use cases for block storage was for databases, and it remains the best choice for large, structured databases today. Block storage is also used for storage area networks (SANs).
The downsides to SANs are that they are a bit complicated to set up and can be more expensive to run than non-SAN storage solutions. For huge networks that need to get data to large numbers of users, though, they're a good choice.
Wireless networks are everywhere today. If you use your smartphone, tablet, or laptop to look for wireless networks, chances are you will find several. A wireless local area network (WLAN) is simply a LAN, but one in which clients connect wirelessly rather than through network cables.
Wireless clients on a network typically access the network through a wireless access point (WAP). The WAP may connect wirelessly to another connectivity device, such as a wireless router, but more likely uses a wired connection to a router or switch. (We'll talk about all of these devices later in the chapter.)
Technically speaking, two or more computers connected together constitute a network. But networks are rarely that simple. When you're looking at the devices or resources available on a network, there are three types of components of which you should be aware: servers, workstations, and resources.
Servers come in many shapes and sizes. They are a core component of the network, providing a link to the resources necessary to perform any task. The link that the server provides could be to a resource existing on the server itself or to a resource on a client computer. The server is the critical enabler, offering directions to the client computers regarding where to go to get what they need.
Servers offer networks the capability of centralizing the control of resources and security, thereby reducing administrative difficulties. They can be used to distribute processes for balancing the load on computers and can thus increase speed and performance. They can also compartmentalize files for improved reliability. That way, if one server goes down, not all of the files are lost.
Servers can perform several different critical roles on a network. For example, a server that provides files to the users on the network is called a file server. Likewise, one that hosts printing services for users is called a print server. Servers can be used for other tasks as well, such as authentication, remote access services, administration, email, and so on. Networks can include multipurpose and single-purpose servers. A multipurpose server can be, for example, both a file server and a print server at the same time. If the server is a single-purpose server, it is a file server only or a print server only. Another distinction we use in categorizing servers is whether they are dedicated or nondedicated:
Dedicated Servers A dedicated server is assigned to provide specific network services and nothing else. Because it is not used as anyone's day-to-day workstation, all of its resources go toward fulfilling its assigned role. For example, a dedicated web server might serve out one or more websites and perform no other functions.
Nondedicated Servers Nondedicated servers are assigned to provide one or more network services and local access. A nondedicated server is expected to be slightly more flexible in its day-to-day use than a dedicated server. Nondedicated servers can be used to direct network traffic and perform administrative actions, but they are also often used to serve as a frontend for the administrator to work with other applications or services or to perform services for more than one network. For example, a dedicated web server might serve out one or more websites, whereas a nondedicated web server serves out websites but might also function as a print server on the local network or as the administrator's workstation.
The nondedicated server is not what some would consider a true server, because it can act as a workstation as well as a server. The workgroup server at your office is an example of a nondedicated server. It might be a combination file, print, and email server. Plus, because of its nature, a nondedicated server could also function well in a peer-to-peer environment. It could be used as a workstation in addition to being a file, print, and email server.
Many networks use both dedicated and nondedicated servers to incorporate the best of both worlds, offering improved network performance with the dedicated servers and flexibility with the nondedicated servers.
Workstations are the computers on which the network users do their work, performing activities such as word processing, database design, graphic design, email, and other office or personal tasks. A workstation is basically an everyday computer, except for the fact that it is connected to a network that offers additional resources. Workstations can range from diskless computer systems to desktops or laptops. In network terms, workstations are also known as client computers. As clients, they are allowed to communicate with the servers in the network to use the network's resources.
It takes several items to make a workstation into a network client. You must install a network interface card (NIC), a special expansion card that allows the PC to talk on a network. You must connect it to a cabling system that connects to other computers (unless your NIC supports wireless networking). And you must install special software, called client software, which allows the computer to talk to the servers and request resources from them. Once all this has been accomplished, the computer is “on the network.” We'll cover more details on how NICs work and how to configure them in the “Network Interface Cards” section later in this chapter.
To the client, the server may be nothing more than just another drive letter. However, because it is in a network environment, the client can use the server as a doorway to more storage or more applications or to communicate with other computers or other networks. To users, being on a network changes a few things:
We now have the server to share the resources and the workstation to use them, but what about the resources themselves? A resource (as far as the network is concerned) is any item that can be used on a network. Resources can include a broad range of items, but the following items are among the most important: printers and other peripherals, disk storage and file shares, and applications.
When only a few printers (and all the associated consumables) have to be purchased for the entire office, the costs are dramatically lower than the costs for supplying printers at every workstation.
Networks also give users more storage space to store their files. Client computers can't always handle the overhead involved in storing large files (for example, database files) because they are already heavily involved in users' day-to-day work activities. Because servers in a network can be dedicated to only certain functions, a server can be allocated to store all of the larger files that are used every day, freeing up disk space on client computers. In addition, if users store their files on a server, the administrator can back up the server periodically to ensure that if something happens to a user's files, those files can be recovered.
Files that all users need to access (such as emergency contact lists and company policies) can also be stored on a server. Having one copy of these files in a central location saves disk space, as opposed to storing the files locally on everyone's system.
Applications (programs) no longer need to be on every computer in the office. If the server is capable of handling the overhead that an application requires, the application can reside on the server and be used by workstations through a network connection. Apps can also be cloud based, which basically means they will reside on a computer somewhere on the Internet. We cover cloud computing in Chapter 8.
PCs use a disk operating system that controls the filesystem and how the applications communicate with the hard disk. Networks use a network operating system (NOS) to control the communication with resources and the flow of data across the network. The NOS runs on the server. Some of the more popular NOSs are Linux, Microsoft's Windows Server series (Server 2022, Server 2019, and so on), and macOS Server. Several other companies offer network operating systems as well.
We have discussed two major components of a typical network—servers and workstations—and we've also talked briefly about network resources. Let's dive a bit deeper into how those resources are accessed on a network.
There are generally two resource access models: peer-to-peer and client-server. It is important to choose the appropriate model. How do you decide which type of resource model is needed? You must first think about the following questions:
Networks cannot just be put together at the drop of a hat. A lot of planning is required before implementation of a network to ensure that whatever design is chosen will be effective and efficient, and not just for today but for the future as well. The forethought of the designer will lead to the best network with the least amount of administrative overhead. In each network, it is important that a plan be developed to answer the previous questions. The answers will help the designer choose the type of resource model to use.
In a peer-to-peer network, the computers act as both service providers and service requestors. An example of a peer-to-peer resource model is shown in Figure 5.7.
FIGURE 5.7 The peer-to-peer resource model
The peer-to-peer model is great for small, simple, inexpensive networks. This model can be set up almost instantly, with little extra hardware required. Many versions of Windows (Windows 11, Windows 10, and others) as well as Linux and macOS are popular operating system environments that support the peer-to-peer resource model. Peer-to-peer networks are also referred to as workgroups.
Generally speaking, there is no centralized administration or control in the peer-to-peer resource model. Every workstation has unique control over the resources that the computer owns, and each workstation must be administered separately. However, this very lack of centralized control can make administering the network difficult; for the same reason, the network isn't very secure. Each user needs to manage separate passwords for each computer on which they wish to access resources, as well as set up and manage the shared resources on their own computer. Moreover, because each computer is acting as both a workstation and a server, it may not be easy to locate resources. The person who is in charge of a file may have moved it without anyone's knowledge. Also, the users who work under this arrangement need more training because they are not only users but also administrators.
Will this type of network meet the needs of the organization today and in the future? Peer-to-peer resource models are generally considered the right choice for small companies that don't expect future growth. Small companies that expect growth, on the other hand, should not choose this type of model.
The client-server model (also known as server-based model) is better than the peer-to-peer model for large networks (say, more than 10 computers) that need a more secure environment and centralized control. Server-based networks use one or more dedicated, centralized servers. All administrative functions and resource sharing are performed from this point. This makes it easier to share resources, perform backups, and support an almost unlimited number of users.
This model also offers better security than the peer-to-peer model. However, the server needs more hardware than a typical workstation/server computer in a peer-to-peer resource model. In addition, it requires specialized software (the NOS) to manage the server's role in the environment. With the addition of a server and the NOS, server-based networks can easily cost more than peer-to-peer resource models. However, for large networks, it's the only choice. An example of a client-server resource model is shown in Figure 5.8.
FIGURE 5.8 The client-server resource model
Server-based networks are often known as domains. The key characteristic of a server-based network is that security is centrally administered. When you log into the network, the login request is passed to the server responsible for security, sometimes known as a domain controller. (Microsoft uses the term domain controller, whereas other vendors of server products do not.) This is different from the peer-to-peer model, where each individual workstation validates users. In a peer-to-peer model, if the user jsmith wants to be able to log into different workstations, they need to have a user account set up on each machine. This can quickly become an administrative nightmare! In a domain, all user accounts are stored on the server. User jsmith needs only one account and can log into any of the workstations in the domain.
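A tiny sketch makes the bookkeeping difference obvious. These dictionaries are hypothetical stand-ins, not how Windows workgroups or a real directory service store accounts, but they show why jsmith needs many accounts in a workgroup and only one in a domain.

```python
# Peer-to-peer (workgroup): jsmith needs a separate account on every machine they use.
workgroup_accounts = {
    "PC-01": {"jsmith", "alice"},
    "PC-02": {"jsmith", "bob"},
    "PC-03": {"jsmith"},          # three machines, three accounts (and passwords) to manage
}

# Client-server (domain): one account, stored centrally on the domain controller.
domain_accounts = {"jsmith", "alice", "bob"}

def workgroup_login(user, machine):
    return user in workgroup_accounts.get(machine, set())

def domain_login(user, machine):
    return user in domain_accounts   # the machine doesn't matter; the directory is central

print(workgroup_login("alice", "PC-02"))  # False: alice has no local account on PC-02
print(domain_login("alice", "PC-02"))     # True: one directory serves every workstation
```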
Client-server resource models are the desired models for companies that are continually growing, that need to support a large environment, or that need centralized security. Server-based networks offer the flexibility to add more resources and clients almost indefinitely into the future. Hardware costs may be higher, but with the centralized administration, managing resources becomes less time consuming. Also, only a few administrators need to be trained, and users are responsible for only their own work environment.
Whatever you decide, always take the time to plan your network before installing it. A network is not something you can just throw together. You don't want to find out a few months down the road that the type of network you chose does not meet the needs of the company—this could be a time-consuming and costly mistake.
A topology is a way of physically laying out the network. When you plan and install a network, you need to choose the right topology for your situation. Each type differs from the others by its cost, ease of installation, fault tolerance (how the topology handles problems such as cable breaks), and ease of reconfiguration (such as adding a new workstation to the existing network).
There are five primary topologies: bus, star, ring, mesh, and hybrid.
Each topology has advantages and disadvantages. Table 5.1 summarizes the advantages and disadvantages of each topology, and then we will go into more detail about each one.
Topology | Advantages | Disadvantages |
---|---|---|
Bus | Cheap. Easy to install. | Difficult to reconfigure. A break in the bus disables the entire network. |
Star | Cheap. Very easy to install and reconfigure. More resilient to a single cable failure. | More expensive than bus. |
Ring | Efficient. Easy to install. | Reconfiguration is difficult. Very expensive. |
Mesh | Best fault tolerance. | Reconfiguration is extremely difficult, extremely expensive, and very complex. |
Hybrid | Gives a combination of the best features of each topology used. | Complex (less so than mesh, however). |
TABLE 5.1 Topologies—advantages and disadvantages
A bus topology is the simplest. It consists of a single cable that runs to every workstation, as shown in Figure 5.9. This topology uses the least amount of cabling. Each computer shares the same data and address path. With a bus topology, messages pass through the trunk, and each workstation checks to see if a message is addressed to it. If the address of the message matches the workstation's address, the network adapter retrieves it. If not, the message is ignored.
FIGURE 5.9 The bus topology
Cable systems that use the bus topology are easy to install. You run a cable from the first computer to the last computer. All of the remaining computers attach to the cable somewhere in between. Because of the simplicity of installation, and because of the low cost of the cable, bus topology cabling systems are the cheapest to install.
Although the bus topology uses the least amount of cabling, it is difficult to add a workstation. If you want to add another workstation, you have to reroute the cable completely and possibly run two additional lengths of it. Also, if any one of the cables breaks, the entire network is disrupted. Therefore, such a system is expensive to maintain and can be difficult to troubleshoot. You will rarely run across physical bus networks in use today.
A star topology (also called a hub-and-spoke topology) branches each network device off a central device called a hub or a switch, making it easy to add a new workstation. If a workstation goes down, it does not affect the entire network; if the central device goes down, the entire network goes with it. Because of this, the hub (or switch) is called a single point of failure. Figure 5.10 shows a simple star network.
FIGURE 5.10 The star topology
Star topologies are very easy to install. A cable is run from each workstation to the switch. The switch is placed in a central location in the office (for example, a utility closet). Star topologies are more expensive to install than bus networks because several more cables need to be installed, plus the switches. But the ease of reconfiguration and fault tolerance (one cable failing does not bring down the entire network) far outweigh the drawbacks. This is by far the most commonly installed network topology in use today.
In a ring topology, each computer connects to two other computers, joining them in a circle and creating a unidirectional path where messages move from workstation to workstation. Each entity participating in the ring reads a message and then regenerates it and hands it to its neighbor on a different network cable. See Figure 5.11 for an example of a ring topology.
FIGURE 5.11 The ring topology
The ring makes it difficult to add new computers. Unlike a star topology network, a ring topology network will go down if one entity is removed from the ring. Physical ring topology systems rarely exist anymore, mainly because the hardware involved was fairly expensive and the fault tolerance was very low.
The mesh topology is the most complex in terms of physical design. In this topology, each device is connected to every other device (see Figure 5.12). This topology is rarely found in wired LANs, mainly because of the complexity of the cabling. If there are x computers, there will be (x × (x – 1)) ÷ 2 cables in the network. For example, if you have five computers in a mesh network, it will use (5 × (5 – 1)) ÷ 2 = 10 cables. This complexity is compounded when you add another workstation. For example, your 5-computer, 10-cable network will jump to 15 cables if you add just one more computer. Imagine how the person doing the cabling would feel if you told them they had to cable 50 computers in a mesh network—they'd have to come up with (50 × (50 – 1)) ÷ 2 = 1,225 cables! (Not to mention figuring out how to connect them all.)
FIGURE 5.12 The mesh topology
Because of its design, the physical mesh topology is expensive to install and maintain. Cables must be run from each device to every other device. The advantage you gain is high fault tolerance. With a mesh topology, there will always be a way to get the data from source to destination. The data may not be able to take the direct route, but it can take an alternate, indirect route. For this reason, the mesh topology is often used to connect multiple sites across WAN links. It uses devices called routers to search multiple routes through the mesh and determine the best path. However, the mesh topology does become inefficient with five or more entities because of the number of connections that need to be maintained.
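The cable-count formula is easy to sanity-check with a few lines of code; this sketch simply evaluates x(x − 1) ÷ 2 for the examples mentioned above.

```python
def mesh_cable_count(computers: int) -> int:
    """Number of point-to-point links needed to fully mesh the given number of devices."""
    return computers * (computers - 1) // 2

for x in (5, 6, 50):
    print(x, "computers ->", mesh_cable_count(x), "cables")
# 5 computers -> 10 cables
# 6 computers -> 15 cables
# 50 computers -> 1225 cables
```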
The hybrid topology is simply a mix of the other topologies. It would be impossible to illustrate it because there are many combinations. In fact, most networks today are not only hybrid but heterogeneous. (They include a mix of components of different types and brands.) The hybrid network may be more expensive than some types of network topologies, but it takes the best features of all the other topologies and exploits them.
Table 5.1, earlier in this chapter, summarizes the advantages and disadvantages of each type of network topology.
Regardless of the type of network you choose to implement, the computers on that network need to know how to talk to each other. To facilitate communication across a network, computers use a common language called a protocol. We'll cover protocols more in Chapter 6, “Introduction to TCP/IP,” but essentially they are languages much like English is a language. Within each language, there are rules that need to be followed so that all computers understand the right communication behavior.
To use a human example, within English there are grammar rules. If you put a bunch of English words together in a way that doesn't make sense, no one will understand you. If you just decide to omit verbs from your language, you're going to be challenged to get your point across. And if everyone talks at the same time, the conversation can be hard to follow.
Computers need standards to follow to keep their communication clear. Different standards are used to describe the rules that computers need to follow to communicate with each other. The most important communication framework, and the backbone of all networking, is the OSI model.
The International Organization for Standardization (ISO) published the Open Systems Interconnection (OSI) model in 1984 to provide a common way of describing network protocols. The ISO put together a seven-layer model providing a relationship between the stages of communication, with each layer adding to the layer above or below it.
Here's how the theory behind the OSI model works. As a transmission takes place, the higher layers pass data through the lower layers. As the data passes through a layer, that layer tacks its information (also called a header) onto the beginning of the information being transmitted until it reaches the bottom layer. A layer may also add a trailer to the end of the data. The bottom layer sends the information out on the wire (or in the air, in the case of wireless).
At the receiving end, the bottom layer receives and reads the information in the header, removes the header and any associated trailer related to its layer, and then passes the remainder to the next highest layer. This procedure continues until the topmost layer receives the data that the sending computer sent.
The OSI model layers are listed here from top to bottom, with descriptions of what each of the layers is responsible for:
Application (Layer 7) Provides network services directly to user applications, such as file transfer, email, and web browsing.
Presentation (Layer 6) Formats and translates data for the Application layer, handling tasks such as character encoding, compression, and encryption.
Session (Layer 5) Establishes, manages, and tears down sessions (dialogs) between communicating hosts.
Transport (Layer 4) Provides end-to-end delivery of data, including segmentation and, where required, reliability and flow control.
Network (Layer 3) Handles logical addressing and the routing of packets between networks.
Data Link (Layer 2) Packages bits into frames and manages physical (MAC) addressing on the local network segment.
Physical (Layer 1) Transmits raw bits across the network medium, whether that medium is a cable or the airwaves.
Figure 5.13 shows the complete OSI model. Note the relationship of each layer to the others and the function of each layer.
FIGURE 5.13 The OSI model
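The header-wrapping behavior described above is easier to see in code. The following is a deliberately simplified sketch (plain strings stand in for real protocol headers, and the Physical layer is treated as if it added a header too): each layer tacks its information onto the front of the data on the way down and strips it off on the way back up.

```python
LAYERS = ["Application", "Presentation", "Session", "Transport",
          "Network", "Data Link", "Physical"]

def encapsulate(payload: str) -> str:
    """Walk down the stack, each layer tacking its header onto the beginning."""
    for layer in LAYERS:
        payload = f"[{layer} hdr]" + payload
    return payload  # what goes out on the wire (or into the air)

def decapsulate(frame: str) -> str:
    """Walk back up the stack, each layer removing the header it recognizes."""
    for layer in reversed(LAYERS):
        header = f"[{layer} hdr]"
        assert frame.startswith(header)   # the receiving layer reads its own header
        frame = frame[len(header):]
    return frame

wire_data = encapsulate("GET /index.html")
print(decapsulate(wire_data))  # the original data arrives at the topmost layer intact
```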
Continuing with our theme of communication, it's time to introduce one final group of standards. You've already learned that a protocol is like a language; think of the IEEE 802 standards as syntax, or the rules that govern who communicates, when they do it, and how they do it.
The Institute of Electrical and Electronics Engineers (IEEE) formed a subcommittee to create standards for network types. These standards specify certain types of networks, although not every network protocol is covered by the IEEE 802 committee specifications. This model contains several standards. The ones commonly in use today are 802.3 CSMA/CD (Ethernet) LAN and 802.11 Wireless networks. The IEEE 802 standards were designed primarily for enhancements to the bottom three layers of the OSI model. The IEEE 802 standard breaks the Data Link layer into two sublayers: a Logical Link Control (LLC) sublayer and a Media Access Control (MAC) sublayer. The Logical Link Control sublayer manages data link communications. The Media Access Control sublayer watches out for data collisions and manages physical addresses, also referred to as MAC addresses.
You've most likely heard of 802.11ax (Wi-Fi 6), 802.11ac (Wi-Fi 5), or 802.11n wireless networking. The rules for communicating with all versions of 802.11 are defined by the IEEE standard. Another very well-known standard is 802.3 CSMA/CD. You might know it by its more popular name, Ethernet.
The original 802.3 CSMA/CD standard defines a bus topology network that uses a 50-ohm coaxial baseband cable and carries transmissions at 10 Mbps. This standard groups data bits into frames and uses the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) cable access method to put data on the cable. Currently, the 802.3 standard has been amended to include speeds up to 400 Gbps over multimode fiber-optic cable.
Breaking the CSMA/CD acronym apart may help illustrate how it works:
Carrier Sense (CS) Before transmitting, a device listens to the wire to sense whether another transmission is already in progress.
Multiple Access (MA) Many devices share the same cable, and any of them may attempt to transmit whenever the wire is free.
Collision Detection (CD) If two devices transmit at the same time, the signals collide; both devices detect the collision, stop, wait a random amount of time, and then retransmit.
The CSMA/CD technology is considered a contention-based access method.
The only major downside to 802.3 is that with large networks (more than 100 computers on the same segment), the number of collisions increases to the point where more collisions than transmissions are taking place.
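Here's a toy simulation of the contention idea. It is only a sketch (one transmission per time slot, a made-up backoff range, and none of the real 802.3 timing), but it shows stations sensing the wire, colliding when they talk over each other, and backing off for a random interval before retrying.

```python
import random

def csma_cd_simulation(stations=("A", "B", "C"), seed=1):
    """Toy contention model: each station has one frame to send on a shared wire."""
    random.seed(seed)
    ready_at = {s: 0 for s in stations}   # slot at which each station may (re)try
    done = set()
    log = []
    slot = 0
    while len(done) < len(stations) and slot < 100:
        # Carrier sense: stations with a pending frame whose backoff has expired.
        talkers = [s for s in stations if s not in done and ready_at[s] <= slot]
        if len(talkers) == 1:
            done.add(talkers[0])                     # clean transmission
            log.append((slot, talkers[0], "sent"))
        elif len(talkers) > 1:
            for s in talkers:                        # collision detected: random backoff
                ready_at[s] = slot + 1 + random.randint(1, 4)
            log.append((slot, tuple(talkers), "collision"))
        slot += 1
    return log

for entry in csma_cd_simulation():
    print(entry)
```

The more stations that contend for the same wire, the more slots are wasted on collisions, which is exactly the scaling problem the paragraph above describes.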
We have looked at the types of networks, network topologies, and the way communications are handled. That's all of the logical stuff. To really get computers to talk to each other requires hardware. Every computer on the network needs to have a network adapter of some type. In many cases, you also need some sort of cable to hook them together. (Wireless networking is the exception, but at the backend of a wireless network there are still components wired together.) And finally, you might also need connectivity devices to attach several computers or networks to each other. We'll look at all of these in the following sections, starting with the component closest in to the “local computer,” or the one you're at, and working outward.
You were introduced to the network interface card (NIC), also referred to as a network adapter card, earlier in the chapter. It provides the physical interface between computer and cabling, and prepares data, sends data, and controls the flow of data. It can also receive and translate data into bytes for the CPU to understand. NICs come in many shapes and sizes.
Different NICs are distinguished by the PC bus type and the network for which they are used. The following sections describe the role of NICs and how to evaluate them.
The first thing you need to determine is whether the NIC will fit the bus type of your PC. If you have more than one type of bus in your PC (for example, a combination PCI/PCI Express), use a NIC that fits into the fastest type (the PCI Express, in this case). This is especially important in servers because the NIC can quickly become a bottleneck if this guideline isn't followed.
More and more computers are using NICs that have USB interfaces. For the rare laptop computer that doesn't otherwise have a NIC built into it, these small portable cards are very handy.
The most important goal of the NIC is to optimize network performance and minimize the amount of time needed to transfer data packets across the network. The key is to ensure that you get the fastest card that you can for the type of network that you're on. For example, if your wireless network supports 802.11g/n/ac/ax, make sure to get an 802.11ax card because it's the fastest.
In order for two computers to send and receive data, the cards must agree on several things:
If the cards can agree, the data is sent successfully. If the cards cannot agree, the data is not sent.
To send data on the network successfully, all NICs need to use the same media access method (such as CSMA/CD) and be connected to the same piece of cable. This usually isn't a problem, because the vast majority of network cards sold today are Ethernet.
In addition, NICs can send data using either full-duplex or half-duplex mode. Half-duplex communication means that between the sender and receiver, only one of them can transmit at any one time. In full-duplex communication, a computer can send and receive data simultaneously. The main advantage of full-duplex over half-duplex communication is performance. For example, a Gigabit Ethernet NIC in full-duplex mode can send and receive at 1 Gbps at the same time, effectively doubling its usable throughput compared to half-duplex mode, where it must take turns on the same link. In addition, collisions are avoided, which speeds up performance as well. Configuring the network adapter's duplexing setting is done from the Advanced tab of the NIC's properties, as shown in Figure 5.14.
FIGURE 5.14 A NIC's Speed & Duplex setting
Each card must have a unique hardware address, called a Media Access Control address or MAC address. (Remember earlier in the chapter we said you didn't need to know the OSI model for the exam, but you should know it anyway? Here's an example of why. Now you can piece together that this is a physical address that is referenced at Layer 2.) If two NICs on the same network have the same hardware address, neither one will be able to communicate. For this reason, the IEEE has established a standard for hardware addresses and assigns blocks of these addresses to NIC manufacturers, who then hardwire the addresses into the cards.
MAC addresses are 48 bits long and written in hexadecimal, such as 40-61-86-E4-5A-9A. An example is shown in Figure 5.15 from the output of the ipconfig /all command executed at the Windows command prompt. On a Mac or in Linux, the analogous command is ifconfig.
FIGURE 5.15 Physical (MAC) address
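Because the IEEE assigns address blocks to manufacturers, the first three octets of a MAC address identify the vendor's block and the last three identify the individual card. This small sketch (plain string handling, no vendor database) normalizes an address and splits it into those two halves.

```python
def split_mac(mac: str):
    """Normalize a 48-bit MAC address and split it into its vendor and device halves."""
    hex_digits = "".join(c for c in mac if c.isalnum()).upper()
    if len(hex_digits) != 12:
        raise ValueError(f"not a 48-bit MAC address: {mac!r}")
    octets = [hex_digits[i:i + 2] for i in range(0, 12, 2)]
    vendor_block = "-".join(octets[:3])   # block assigned to the manufacturer by the IEEE
    device_id = "-".join(octets[3:])      # uniquely identifies the card within that block
    return vendor_block, device_id

print(split_mac("40-61-86-E4-5A-9A"))   # ('40-61-86', 'E4-5A-9A')
print(split_mac("40:61:86:e4:5a:9a"))   # same address written in a different notation
```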
In order for the computer to use the NIC, it is very important to install the proper device drivers. These drivers are pieces of software that communicate directly with the operating system, specifically the network redirector and adapter interface. Drivers are specific to each NIC and operating system, and they operate in the Media Access Control (MAC) sublayer of the Data Link layer of the OSI model.
To see which driver version is installed, you need to look at the device's properties. There are several ways to do this. A common one is to open Device Manager (click Start, type Device, and click Device Manager under Best Match) and find the device, as shown in Figure 5.16.
FIGURE 5.16 Device Manager
Right-click the device, click Properties, and then go to the Driver tab, as shown in Figure 5.17. Here you can see a lot of information about the driver, update it, or roll it back if you installed a new one and it fails for some reason. You can also update the driver by right-clicking the device in Device Manager and choosing Update Driver from the menu.
FIGURE 5.17 NIC properties Driver tab
When the data is passing through the OSI model and reaches the Physical layer, it must find its way onto the medium that is used to transfer data physically from computer to computer. This medium is called the cable (or in the case of wireless networks, the air). It is the NIC's role to prepare the data for transmission, but it is the cable's role to move the data properly to its intended destination. The following sections discuss the three main types of physical cabling: coaxial, twisted pair, and fiber-optic. (Wireless communication is covered in Chapter 7.)
Coaxial cable (or coax) contains a center conductor core made of copper, which is surrounded by a plastic jacket with a braided shield over it (as shown in Figure 5.18). Either Teflon or a plastic coating covers this metal shield.
FIGURE 5.18 Coaxial cable
Common network cables are covered with a plastic called polyvinyl chloride (PVC). Although PVC is flexible, fairly durable, and inexpensive, it has a nasty side effect in that it produces poisonous gas when burned. An alternative is a Teflon-type covering that is frequently referred to as a plenum-rated coating. That simply means that the coating does not produce toxic gas when burned and is rated for use in the ventilation plenum areas in a building that circulate breathable air, such as air-conditioning and heating systems. This type of cable is more expensive, but it may be mandated by electrical code whenever cable is hidden in walls or ceilings.
Coaxial cable is available in various specifications that are rated according to the Radio Guide (RG) system, which was originally developed by the U.S. military. The thicker the copper, the farther a signal can travel—and with that comes a higher cost and a less flexible cable. Coax is rarely seen in computer networking today because it's painfully slow; its heyday was a few decades ago.
When coax cable was popular for networking, there were two standards that had high use: RG-8 (thicknet) and RG-58A/U (thinnet). Thicknet had a maximum segment distance of 500 meters and was used primarily for network backbones. Thinnet was more often used in a conventional physical bus. A thinnet segment could span 185 meters. Both thicknet and thinnet had impedance of 50 ohms. Table 5.2 shows the different types of RG cabling and their uses. Although coax is an A+ exam objective, no specific coax cabling standards are currently specified on the exam objectives. The ones that used to be named on the A+ exam objectives were RG-6 and RG-59.
RG # | Popular name | Ethernet implementation | Type of cable |
---|---|---|---|
RG-6 | Satellite/cable TV, cable modems | N/A | Solid copper |
RG-8 | Thicknet | 10Base5 | Solid copper |
RG-58 U | N/A | None | Solid copper |
RG-58 A/U | Thinnet | 10Base2 | Stranded copper |
RG-59 | Cable television | N/A | Solid copper |
TABLE 5.2 Coax RG types
Coaxial networking has all but gone the way of the dinosaur. The only two coaxial cable types you might see today are RG-6 and RG-59. Of the two, RG-6 has a thicker core (1.0 mm), can run longer distances (up to 304 meters, or 1,000 feet), and can support digital signals. RG-59 (0.762 mm core) is considered adequate for analog cable TV but not digital and has a maximum distance of about 228 meters (750 feet). The maximum speed for each depends on the quality of the cable and the standard on which it's being used. Both have impedance of 75 ohms.
Thicknet was a bear to use. Not only was it highly inflexible, but you also needed to use a connector called a vampire tap. A vampire tap is so named because a metal tooth sinks into the cable, thus making the connection with the inner conductor. The tap is connected to an external transceiver that in turn has a 15-pin AUI connector (also called a DIX or DB-15 connector) to which you attach a cable that connects to the station. The transceiver is shown in Figure 5.19. On the right side, you will see the thicknet cable running through the portion of the unit that contains the vampire tap. DIX got its name from the companies that worked on this format—Digital, Intel, and Xerox.
Thinnet coax was much easier to use. Generally, thinnet cables used a BNC connector (see Figure 5.20) to attach to a T-shaped connector that attached to the workstation. The other side of the T-connector would either continue on with another thinnet segment or be capped off with a terminator. It is beyond the scope of this book to settle the long-standing argument over the meaning of the abbreviation BNC. We have heard Bayonet Connector, Bayonet Nut Connector, and British Naval Connector—among others. What is relevant is that the BNC connector locks securely with a quarter-twist motion.
FIGURE 5.19 Thicknet transceiver and cable inside a vampire tap
Thicknet transceiver licensed under CC BY-SA 2.5 via Wikimedia Commons. http://commons.wikimedia.org/wiki/File:ThicknetTransceiver.jpg#/media/File:ThicknetTransceiver.jpg
FIGURE 5.20 Male and female BNC connectors, T-connector, and terminator
Another type of connector that you will see in use with coax is a splitter. As its name implies, a splitter takes a single signal (say that three times fast) and splits it into multiple replicas of the same signal. You might use this for cable TV—one line may run into your house, but the signal ultimately needs to get split for three televisions. This type of configuration will work for cable TV or cable Internet. Figure 5.21 shows a one-to-two coax splitter. You can also buy splitters that split one input into three or more outputs.
FIGURE 5.21 A coax splitter
Keep in mind that a coax signal is designed to go from one sender to one receiver, so splitting it can cause some issues. Splitting the signal causes it to weaken, meaning that signal quality could be lower, and it might not travel the same distance as a non-split signal. To avoid problems, don't over-split the cable, and purchase a good-quality or amplified splitter.
The last type of coax connector we will cover is called an F-connector (referred to in exam objectives as an F type connector, shown in Figure 5.22), and it is used with cable TV. You'll see it on the end of an RG-6 or possibly an RG-59 cable. The exposed end of the copper cable is pushed into the receptacle, and the connector is threaded so that it can screw into place.
FIGURE 5.22 An F-connector
Twisted pair is the most popular type of cabling to use because of its flexibility and low cost. It consists of several pairs of wire twisted around each other within an insulated jacket, as shown in Figure 5.23.
FIGURE 5.23 Unshielded twisted pair cable
There are two different types of twisted pair cables: shielded twisted pair (STP) and unshielded twisted pair (UTP). Both types of cable have two or four pairs of twisted wires going through them. The difference is that STP has an extra layer of braided foil shielding surrounding the wires to decrease electrical interference, as shown in Figure 5.24. (In Figure 5.24, the individual wire pairs are shielded as well.) UTP has a PVC or plenum coating but no outer foil shield to protect it from interference. In the real world, UTP is the most common networking cable type used. STP has been used less frequently, but the newer Cat 7 and Cat 8 standards rely on shielding and offer higher frequencies to deliver ultra-fast transmission speeds.
FIGURE 5.24 Shielded twisted pair cable
Twisted pair cabling has been in use, at least with old analog telephone lines, for a few generations now. Over time, the need for higher transmission speeds required faster cabling, and the cable manufacturers have been up to the challenge. Now you can find twisted pair in several grades to offer different levels of performance and protection against electrical interference:
For as long as twisted pair has existed, every technician has needed to memorize its standard maximum transmission distance of 100 meters (328 feet). You should burn that into your brain, too. Note, however, that some newer standards have shorter maximum distances. For example, if you want to run 10GBaseT over Cat 6, you won't get that much distance—about 55 meters under ideal conditions. Cat 8 (which isn't an exam objective) can provide up to 40 Gbps but only at 30 meters.
Twisted pair cabling uses a connector type called an RJ (registered jack) connector. You are probably familiar with RJ connectors. Most landline phones connect with an RJ-11 connector. The connector used with UTP cable is called RJ-45. The RJ-11 has room for two pairs (four wires), and the RJ-45 has room for four pairs (eight wires).
In almost every case, UTP uses RJ connectors; a crimper is used to attach an RJ connector to a cable. Higher-quality crimping tools have interchangeable dies for both types of connectors. (Crimpers are discussed in Chapter 12, “Hardware and Network Troubleshooting.”) Figure 5.25 shows an RJ-11 connector and an RJ-45 connector.
You will also find RJ-45 splitters (often called Ethernet splitters) in the marketplace. The idea is similar to a coax splitter, but functionally they are very different. Coax signals are carried over one wire, while twisted pair uses either two pairs of wires (for 100 Mbps or slower connections) or all four pairs of wires (for Gigabit Ethernet and faster). An Ethernet splitter will take the incoming signal on two pairs and then split it, so on the output end it produces two sets of signals using two pairs each. Because of this, Ethernet splitters are limited to 100 Mbps connections.
FIGURE 5.25 RJ-11 and RJ-45 connectors
Some twisted pair installations don't use standard RJ-45 connectors. Instead, the cable is run to a central panel called a punchdown block, often located in a server room or connectivity closet. In a punchdown block, the metal wires are connected directly to the block to make the connection. Instead of a crimper, a punchdown tool is used. Figure 5.26 shows a closeup of wires connected to an older-style 66 block, frequently used in analog telephone communications. Networks that use blocks today are more likely to use a 110 block, which has a higher density of connectors and is designed to reduce crosstalk between cables.
Twisted pair cables are unique in today's network environment in that they use multiple physical wires. Those eight wires need to be in the right places in the RJ-45 connector or it's very likely that the cable will not work properly. To ensure consistency in the industry, two standards have been developed: T568A and T568B.
Older implementations using UTP used only two pairs of wires, and those two pairs were matched to pins 1, 2, 3, and 6 in the connector. Newer applications such as Voice over IP (VoIP) and Gigabit Ethernet use all four pairs of wires, so you need to make sure that they're all where they're supposed to be.
FIGURE 5.26 Cables in a punchdown block
By Z22 - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=34324171
If you're creating a regular network patch cable to connect a computer to a hub or switch, both sides need to have the same pinout. For that, follow either the T568A standard shown in Figure 5.27 or the T568B standard shown in Figure 5.28. Although there are no differences in terms of how the standards perform, some companies prefer one to the other.
FIGURE 5.27 T568A standard
If you are going to create a cable to connect a computer to another computer directly, or you're going to make a connection from hub to hub, switch to switch, hub to switch, or a computer directly to a router, then you need what's called a crossover cable. In a crossover cable, pin 1 to pin 3 and pin 2 to pin 6 are crossed on one side of the cable only. This is to get the "send" pins matched up with the "receive" pins on the other side, and vice versa. In practice, this means a crossover cable is wired to the T568A standard on one end and the T568B standard on the other; compare Figure 5.27 with Figure 5.28 to visualize the swap.
FIGURE 5.28 T568B standard
The key thing to remember is that a patch (straight-through) cable is the same on both ends. A crossover cable is different on each end. You should know the order of the colors for both standards.
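If it helps with memorizing the color orders, here's a small reference sketch that lists the T568A and T568B pin colors and confirms the crossover relationship described above (pins 1 and 3 swapped, pins 2 and 6 swapped).

```python
# Pin 1 through pin 8 for each wiring standard.
T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

def crossover(pinout):
    """Swap pins 1<->3 and 2<->6, which is what a crossover cable does."""
    p = list(pinout)
    p[0], p[2] = p[2], p[0]
    p[1], p[5] = p[5], p[1]
    return p

# A straight-through (patch) cable uses the same standard on both ends;
# crossing T568A over on one end yields T568B, and vice versa.
print(crossover(T568A) == T568B)   # True
```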
Occasionally you will run into situations where network cable needs to be run outside or buried underground. For these types of installations, use direct burial cable. Direct burial cable is STP with an extra waterproof sheathing.
Whenever you run cables in an area where they can be stepped on (and it's not recommended you do), be aware that no amount of shielding will totally protect the cable from damage. It's best to use a cable guard of some sort to provide protection. We can't count the number of times we've seen people use duct tape or something similar to keep a cable in a high-traffic area from moving around—don't do it. The tape will work to keep it in place, but does nothing to protect the cable.
An alternative may be to bury the cable underground. The recommended distance is 6" to 8" below the ground, and away from any lines that carry electrical current. Also, it's recommended that you use a conduit, such as PVC pipe, to further protect the cable.
Fiber-optic cabling has been called one of the best advances in cabling. It consists of a thin, flexible glass or plastic fiber surrounded by a rubberized outer coating (see Figure 5.29). It provides transmission speeds from 100 Mbps to 10 Gbps over a maximum distance of several miles. Because it uses pulses of light instead of electric voltages to transmit data, it is immune to electrical interference and to wiretapping.
FIGURE 5.29 Fiber-optic cable
Optical fiber cable by Cable master (derivative work by Srleffler), via Wikimedia Commons. http://commons.wikimedia.org/wiki/File:Optical_fiber_cable.jpg#/media/File:Optical_fiber_cable.jpg
While it's gaining ground rapidly, fiber-optic cable is still not as popular as UTP for local area networks because of its high installation cost. Fiber-optic cabling is great for networks that need extremely fast transmission rates or transmissions over long distances, or for networks that have had problems with electrical interference in the past. Fiber is also becoming more common as a backbone for telecommunication systems, and in many places fiber-optic cables can be used to deliver high-speed Internet connections to businesses and homes. We'll talk more about this in Chapter 7.
Fiber-optic cable comes in two varieties: single-mode or multimode. The term mode refers to the bundles of light that enter the fiber-optic cable. Single-mode fiber (SMF) cable uses only a single mode (or path) of light to propagate through the fiber cable, whereas multimode fiber (MMF) allows multiple modes of light to propagate simultaneously. In multimode fiber-optic cable, the light bounces off the cable walls as it travels through the cable, which causes the signal to weaken more quickly.
Multimode fiber is most often used as horizontal cable. It permits multiple modes of light to propagate through the cable, which limits usable cable distances compared to single-mode fiber. Devices that use MMF cable typically use light-emitting diodes (LEDs) to generate the light that travels through the cable; however, lasers are now being used with multimode fiber in higher-bandwidth network devices, such as Gigabit Ethernet. MMF can transmit up to 10 Gbps for up to 550 meters (1,804 feet, or just over one-third of a mile), depending on the standard used.
Single-mode fiber cable is commonly used as backbone cabling. It is also usually the cable type used in phone systems. Light travels through single-mode fiber-optic cable using only a single mode, meaning that it travels straight down the fiber and does not bounce off the cable walls. Because only a single mode of light travels through the cable, single-mode fiber-optic cable supports higher bandwidth over longer distances than multimode fiber-optic cable does. Devices that use single-mode fiber-optic cable typically use lasers to generate the light that travels through the cable. SMF can transmit up to 10 Gbps for up to 40 kilometers (24.85 miles), depending on the standard used.
We have talked about several different types of cables, and it's possible that you will be asked to know maximum distances and transmission speeds on the A+ exam. Table 5.3 summarizes the most common cable types, the specifications with which they are used, and their characteristics.
Cable type | Ethernet specification | Maximum speed | Maximum distance | Notes |
---|---|---|---|---|
RG-6 coax | * | * | 304 meters | Digital cable/satellite television |
RG-59 coax | * | * | 228 meters | Analog cable TV |
Cat 5 UTP or STP | 100BaseT | 100 Mbps | 100 meters | 100 Mbps and less use two pairs of wires. |
Cat 5e UTP | 1000BaseT | 1 Gbps | 100 meters | 1 Gbps and higher use four pairs of wires. |
Cat 6 UTP | 10GBaseT | 10 Gbps | 55 meters | Can support 1 Gbps up to 100 meters. |
Cat 6a UTP | 10GBaseT | 10 Gbps | 100 meters | |
Cat 7 UTP | 10GBaseT | 10 Gbps | 100 meters | Every wire pair is individually shielded. |
Cat 8 UTP | 25GBaseT or 40GBaseT | 40 Gbps | 30 meters at 25 Gbps or 40 Gbps | Can support 10 Gbps up to 100 meters. |
MMF fiber | 1000BaseLX or 1000BaseSX | 1 Gbps | 550 meters | For fiber, maximum length depends on fiber size and quality. |
MMF fiber | 10GBaseSR or 10GBaseSW | 10 Gbps | 300 meters | |
SMF fiber | 10GBaseER or 10GBaseEW | 10 Gbps | 40 kilometers | |
* RG-6 and RG-59 coax cables can be used with many different specifications; the maximum speed depends on cable quality and specification.
TABLE 5.3 Common cable types and characteristics
There are literally dozens of fiber-optic connectors out there because it seemed that every producer wanted its proprietary design to become “the standard.” Three of the most commonly used ones are ST, SC, and LC.
The straight tip (ST) fiber-optic connector, developed by AT&T, is probably the most widely used fiber-optic connector. It uses a twist-and-lock attachment mechanism that makes connections and disconnections fairly easy. The ease of use of the ST is one of the attributes that make this connector so popular. Figure 5.30 shows ST connectors.
FIGURE 5.30 ST connectors
The subscriber connector (SC), also sometimes known as a square connector, is shown in Figure 5.31. SCs are latched connectors, making it virtually impossible for you to pull out the connector without releasing its latch, usually by pressing a button or release. SCs work with either single-mode or multimode optical fibers. They aren't as popular as ST connectors for LAN connections.
FIGURE 5.31 A sample SC
The last type of connector with which you need to be familiar is the Lucent connector (LC), sometimes also called a local connector, which was developed by Lucent Technologies. It is a mini form factor (MFF) connector, especially popular for use with Fibre Channel adapters, fast storage area networks, and Gigabit Ethernet adapters (see Figure 5.32).
FIGURE 5.32 LC fiber connector
The prices of network cables differ dramatically between copper and fiber cables. Exercise 5.1 asks you to investigate the difference for yourself.
Network cabling can link one computer to another, but most networks are far grander in scale than two simple machines. There are a variety of networking devices that provide connectivity to the network, make the network bigger, and offer auxiliary services to end users.
In the following sections, we're going to classify additional networking components into two broad categories: connectivity devices and auxiliary devices. We'll also touch on software-defined networking, a concept that turned classical networking on its head.
We all know that if you want to be part of a computer network, you need to attach to that network somehow. Using network cables is one way to accomplish this, but not everyone is in a position to just plug a cable in and go. In addition, if you want to grow your network beyond a few simple connections, you need to use a special class of networking devices known as connectivity devices. These devices allow communications to break the boundaries of local networks and really provide the backbone for nearly all computer networks, regardless of size.
There are several categories of connectivity devices. These connectivity devices make it possible for users to connect to networks and to lengthen networks to almost unlimited distances. We will now discuss the most important and frequently used connectivity devices.
If you want to connect to a network or the Internet using plain old phone lines and a dial-up connection, a modem is the device you'll need. Modems got their name because they modulate and demodulate (mo-dem) digital signals that computers use into analog signals that can be passed over telephone lines. In the early to mid-1990s, modems were practically the only device available to get onto the Internet. Many companies also used them to allow users who were not in the office to dial into the local network.
While modems did provide flexibility, you needed to be near a phone line, and speed was an issue. The fastest modems transferred data at 56 Kbps. At the time that felt lightning quick, but fortunately our species has moved well beyond that technology. It's horrifically slow by today's standards and therefore rarely used.
The traditional modem is essentially obsolete—most homes and many businesses now access the Internet through the use of a cable modem or digital subscriber line (DSL) modem. The primary difference between the two is the infrastructure they connect to. Cable modems use television cable lines, and DSL modems use telephone lines.
Both cable and DSL modems are digital and therefore aren't technically modems because they don't modulate and demodulate analog signals. We'll cover cable Internet and DSL technologies in more detail in Chapter 7.
Fiber-optic connections to businesses and homes are becoming more and more common, as communications providers race to install fiber all over the country. If there is fiber in your work or home neighborhood, you need a different type of modem to connect to the ISP for Internet access. Such a device is called an optical network terminal (ONT) modem.
Much like cable and DSL modems, an ONT isn't truly a modem either, as it doesn't deal with analog-to-digital modulation. It is closer to a modem in a sense though because it takes optical signals and changes them into electrical ones for your internal home or business network. ONTs are typically located out of sight in a wiring closet or at the junction box on the outside of the building, where the optical cabling comes to an end.
Technically speaking, an access point is any point that allows a user on to a network. On a wired network, this means a hub or a switch, both of which we will cover shortly. The term is commonly used in reference to a wireless access point, which lets users connect to your network via an 802.11 technology. We'll get deeper into wireless access points and how to configure them in Chapter 7.
A repeater, or extender, is a small, powered device that receives a signal, amplifies it, and sends it on its way. The whole purpose of a repeater is to extend the functional distance of a cable run. For example, you know that UTP is limited to 100 meters, but what if you need to make a cable run that is 160 meters long? (One answer could be to use fiber, but pretend that's not an option.) You could run two lengths of cable with a repeater in the center, and it would work. Repeaters and extenders work at the Physical layer (Layer 1) of the OSI model. They don't examine the data or make any changes to it—they just take what they receive and send it along its merry way.
A hub is a device used to link several computers together. Hubs are very simple devices that possess no real intelligence. They simply repeat any signal that comes in on one port and copy it to the other ports (a process that is also called broadcasting). You'll sometimes hear them referred to as multiport repeaters. They work at Layer 1 of the OSI model, just as repeaters do.
There are two types of hubs: active and passive. Passive hubs connect all ports together electrically but have no power source of their own, so they can't regenerate the signal—think of them as simple cable splitters. Active hubs use electronics to amplify and clean up the signal before it is broadcast to the other ports. Active hubs can therefore be used to extend the length of a network, whereas passive hubs cannot.
A patch panel is essentially a large hub that is rack mounted. It houses multiple cable connections but possesses no network intelligence. Its sole purpose is to connect cables together. Short patch cables are used to plug into the front-panel connectors, and there are longer, more permanent cables on the back. Figure 5.33 shows three rack-mounted devices. The top one is a 24-port patch panel. Underneath that is a 24-port switch, and then a Dell server is shown.
FIGURE 5.33 A patch panel, switch, and server
Switches work at Layer 2 of the OSI model and provide centralized connectivity, just like hubs. A switch often looks similar to a hub, so it's easy to confuse them, but there are big performance differences. Hubs pass along all traffic, whereas a switch examines the Layer 2 header of each incoming frame and forwards it only to the port where the destination lives. This greatly reduces overhead and thus improves performance, because there is essentially a virtual connection between sender and receiver. The only downside is that switches still forward broadcasts, because broadcasts are addressed to everyone.
Switches come in two varieties: unmanaged and managed. We've already explained the functionality of an unmanaged switch—it connects two or more computers and passes traffic for a given MAC address along to that device's port. A managed switch adds the ability to configure ports, manage traffic, and monitor traffic for issues. For management, the switch will use a network protocol, such as Simple Network Management Protocol (SNMP). (We'll talk about SNMP in depth in Chapter 6.) Managed switches cost more but provide features such as quality of service (QoS), redundancy, port mirroring, and virtual LANs (VLANs). Here's a description of each:
Quality of service (QoS) lets the switch prioritize time-sensitive traffic, such as voice or video, over less critical traffic.
Redundancy provides backup links or devices so that the failure of one switch or cable doesn't take down the network.
Port mirroring copies the traffic from one or more ports to a designated monitoring port so that an administrator can analyze it.
Virtual LANs (VLANs) logically split one physical switch into multiple separate networks, grouping ports into their own broadcast domains.
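If it helps to visualize the hub-versus-switch difference, here's a toy Python sketch of the learn-and-forward behavior described above. The MAC addresses and port numbers are made up purely for illustration—real switches do this in dedicated hardware:

```python
class SimpleSwitch:
    """Toy model of Layer 2 switching: learn source MACs, forward by destination MAC."""
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}          # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn where the sender lives
        if dst_mac == "ff:ff:ff:ff:ff:ff":       # broadcasts still go to every port
            return [p for p in range(self.num_ports) if p != in_port]
        if dst_mac in self.mac_table:            # known destination: one port only
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]  # unknown: flood

sw = SimpleSwitch(num_ports=4)
print(sw.receive(0, "aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"))  # unknown yet -> flood [1, 2, 3]
print(sw.receive(1, "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))  # learned -> [0] only
```

A hub, by contrast, would behave like the "flood" case on every single frame, which is exactly why switches perform so much better.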
Nearly every hub or switch that you will see has one or more status indicator lights on it. If there is a connection to a port of the switch, a light either above the connector or on an LED panel elsewhere on the device will light up. If traffic is crossing the port, the light may flash, or there may be a secondary light that will light up. Many devices can also detect a problem in the connection. If a normal connection produces a green light, a bad connection might produce an amber light.
Routers are highly intelligent devices that connect multiple network types and determine the best path for sending data. They can route packets across multiple networks and use routing tables to store network addresses to determine the best destination. Routers operate at the Network layer (Layer 3) of the OSI model. Because of this, they make their decisions on what to do with traffic based on logical addresses, such as an IP address.
Routers have a few key functions:
They connect multiple networks, including dissimilar network types, to one another.
They use routing tables to choose the best path for data to take.
They forward packets from one network toward their destination on another.
In the last decade or so, wireless routers have become common for small business and home networks. They possess all the functionality of routers historically associated with networking, but they are relatively inexpensive. We'll talk more about these routers in Chapter 7.
The devices we just talked about are specialized to provide connectivity. This next group of devices adds in features outside of connectivity that can help network users, specifically by protecting them from malicious attacks, providing network connections over power lines, and providing power over Ethernet cables.
A firewall is a hardware or software solution that serves as your network's security guard. They're probably the most important devices on networks that are connected to the Internet. Firewalls can protect you in two ways: they protect your network resources from hackers lurking in the dark corners of the Internet, and they can simultaneously prevent computers on your network from accessing undesirable content on the Internet. At a basic level, firewalls filter packets based on rules defined by the network administrator.
Firewalls can be stand-alone “black boxes,” software installed on a server or router, or some combination of hardware and software. Most firewalls will have at least two network connections: one to the Internet, or public side, and one to the internal network, or private side. Some firewalls have a third network port for a second semi-internal network. This port is used to connect servers that can be considered both public and private, such as web and email servers. This intermediary network is known as a screened subnet (formerly called a demilitarized zone [DMZ]).
Firewalls can be network based in that they protect a group of computers (or an entire network), or they can be host based. A host-based firewall (such as Windows Defender Firewall) protects only the individual computer on which it's installed.
A firewall is configured to allow only packets that pass specific security restrictions to get through. By default, most firewalls are configured as default deny, which means that all traffic is blocked unless specifically authorized by the administrator. The basic method of configuring firewalls is to use an access control list (ACL). The ACL is the set of rules that determines which traffic gets through the firewall and which traffic is blocked. ACLs are typically configured to block traffic by IP address, port number, domain name, or some combination of all three.
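To make the default-deny idea concrete, here's a minimal Python sketch of ACL-style filtering. The rule format, networks, and ports are invented for the example—real firewall ACLs are far richer—but the logic is the same: if no rule matches, the traffic is blocked:

```python
import ipaddress

# Each rule is (allowed source network, allowed destination port).
# Anything that doesn't match a rule is dropped (default deny).
acl = [
    (ipaddress.ip_network("192.168.1.0/24"), 443),   # internal hosts may use HTTPS
    (ipaddress.ip_network("192.168.1.0/24"), 53),    # ...and DNS
]

def allowed(src_ip, dst_port):
    src = ipaddress.ip_address(src_ip)
    return any(src in net and dst_port == port for net, port in acl)

print(allowed("192.168.1.50", 443))   # True  - matches a rule
print(allowed("192.168.1.50", 23))    # False - Telnet not authorized
print(allowed("10.0.0.7", 443))       # False - source not in an allowed network
```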
Occasionally, you will find yourself in a spot where it's not possible to run cables for a network connection and wireless is a problem as well. For example, perhaps you are installing a device that only has a wired RJ-45 port but you can't get a cable to it. Ethernet over Power can help make that connection by using electrical outlets; an adapter is shown in Figure 5.34.
FIGURE 5.34 Ethernet over Power adapter
For Ethernet over Power to work, both devices must be on the same electrical circuit, such as would be the case for a house or a small building. To connect the devices, plug both in and then press a button on the side of each device. They will search the electrical circuit for the signal from the other and negotiate the connection. As you can see in Figure 5.34, an Ethernet cable also connects to the device. You can plug that cable into a device directly or into a connectivity device, such as a hub or a switch.
If you can run an Ethernet signal over power lines, why can't you run electricity over network cables? As it turns out, you can—with Power over Ethernet (PoE). This technology is extremely useful in situations where you need a wireless access point in a relatively remote location that does not have any power outlets. For it to work, the access point and the device it plugs into (such as a switch) both need to support PoE. In a configuration such as this, the switch would be considered an endspan PoE device, because it's at the end of the network connection. If the switch in question doesn't support PoE, you can get a device that sits between the switch and the access point (called a midspan device) whose sole purpose is to supply power via the Ethernet connection. Appropriately, these midspan devices are called Power over Ethernet injectors.
The first PoE standard was IEEE 802.3af, released in 2003, and it provided up to 15.4 W of DC power to connected devices. This was enough for wireless access points as well as basic surveillance cameras and VoIP phones, but not enough for videoconferencing equipment, alarm systems, laptops, or flat-screen monitors. Enhancements to the standard have been made over the years to support more power-hungry devices. Table 5.4 lists the standards you should be familiar with.
Name | Year | IEEE standard | Max power | Supported devices |
---|---|---|---|---|
PoE | 2003 | 802.3af | 15.4 W | Wireless access points, static surveillance cameras, VoIP phones |
PoE+ | 2009 | 802.3at | 30 W | Alarm systems, PTZ (pan/tilt/zoom) cameras, video IP phones |
PoE++ | 2018 | 802.3bt (Type 3) | 60 W | Multi-radio wireless access points, video conferencing equipment |
PoE++ | 2018 | 802.3bt (Type 4) | 100 W | Laptops, flat-screen monitors |
TABLE 5.4 PoE standards
Talking about software-defined networking (SDN) in a section on networking hardware honestly feels a bit odd, because SDN is essentially setting up a network virtually, without the physical hardware connectivity devices that most people are used to. In a sense, it's a network without the network hardware. When it came out, it was radical enough to blow the minds of many networking professionals. It's all enabled by the cloud, which we will cover more in Chapter 8. For now, though, to help illustrate what SDN is, let's first look at a relatively simple network layout, such as the one shown in Figure 5.35.
FIGURE 5.35 A sample network
The network in Figure 5.35 has two routers, including one that connects the corporate network to the Internet. Four switches manage internal network traffic, and client devices connect to the switches. New network clients can attach to existing switches, and if the switches run out of ports, more can be added. Of course, in today's environment, we should draw in wireless access points and their clients as well. The wireless access points will connect to a switch or router with a network cable. Adding additional switches, routers, or other network control devices requires purchasing and installing the device and some configuration, but it's nothing that a good net admin can't handle.
Large enterprise networks are significantly more complex and include more routers and perhaps load balancers, firewalls, and other network appliances. Adding to the network becomes more complicated. In particular, adding more routers requires a lot of reconfiguration so that the routers know how to talk to each other.
Routers play a critical role in inter-network communications. The router's job is to take incoming data packets, read the destination address, and send the packet on to the next network that gets the data closer to delivery. There are two critical pieces to the router's job:
Maintaining a routing table—the router's internal map of which networks it knows how to reach and which interface or neighboring router leads to each of them.
Forwarding each packet out the appropriate interface toward the next hop, based on that table.
In a traditional networking environment, each router is responsible for maintaining its own table. While almost all routers have the ability to talk to their neighbor routers for route updates, the whole setup is still pretty complicated for administrators to manage. The complexity can really become a problem when you are troubleshooting data delivery problems.
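As a rough illustration of the work each router does on its own, here's a minimal Python sketch of a routing table lookup: the router finds the most specific (longest-prefix) route that matches the destination and forwards accordingly. The networks and next hops are invented for the example:

```python
import ipaddress

# destination network -> next hop (invented example routes)
routing_table = {
    ipaddress.ip_network("10.1.0.0/16"): "Router B",
    ipaddress.ip_network("10.1.20.0/24"): "Router C",
    ipaddress.ip_network("0.0.0.0/0"): "ISP gateway",   # default route
}

def next_hop(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in routing_table if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return routing_table[best]

print(next_hop("10.1.20.5"))   # Router C (most specific match)
print(next_hop("10.1.99.9"))   # Router B
print(next_hop("8.8.8.8"))     # ISP gateway (default route)
```

Now imagine keeping dozens of tables like this synchronized by hand across a large network, and the appeal of centralizing the decision-making becomes obvious.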
Enter SDN. The goal of SDN is to make networks more agile and flexible by separating the forwarding of network packets (the infrastructure layer) from the logical decision-making process (the control layer). The control layer consists of one or more devices that make the decisions on where to send packets—they're the brains of the operation. The physical devices then just forward packets based on what the control layer tells them. Figure 5.36 illustrates the logical SDN structure.
FIGURE 5.36 Software-defined networking
In addition to agility and flexibility, a third advantage to using SDN is centralized network monitoring. Instead of running monitoring apps for each individual piece of network hardware, the SDN software can monitor all devices in one app.
The SDN controller acts as an abstraction layer. Applications that need to use the network actually interface with the SDN controller, thinking that they are working directly with the networking hardware. In the end, data still gets from point A to point B, so any distinction between how that happens isn't important. Because the abstraction layer exists, though, the underlying network hardware and configuration can change, and it won't affect how the applications work. It's the job of the SDN controller to understand how to talk to the infrastructure.
To make things even more fun, SDN can be used to create virtual networks without any hardware at all. Imagine having five logical servers running in a cloud, all using the same hardware. If they want to talk to each other, they will send data the way they know how to—that is, to their network cards for delivery on the network, likely through a switch or router. But if they are using the same hardware, then they all have the same network adapter. That makes things weird, right? Well, not really, because SDN manages the communications between the servers. Each server will be assigned a logical NIC and communicate to the others via their logical NICs. SDN manages it all, and there are no communication issues.
In this chapter, we covered a broad variety of networking topics. This chapter contains everything that you need to get you ready for the networking questions on the A+ 220-1101 exam. At the same time, the A+ exam (and consequently this chapter) barely scratches the surface of the things that you can learn about networking. If making computers talk to each other effectively is an area of interest to you, we suggest that you consider studying for the CompTIA Network+ exam after you pass your A+ tests.
First, we started with networking fundamentals. Much of the discussion of fundamentals was about understanding the concepts behind networking so that you know how to set them up. Topics included LANs versus WANs; clients, servers, and resources; network operating systems; peer-to-peer and server-based resource models; network topologies, such as bus, star, and ring; and theoretical networking models and standards, such as the OSI model and IEEE standards.
Next, you learned about hardware devices used in networking. Each computer needs a network adapter (NIC) of some sort to connect to the network. On a wired network, cables are required, and there are several different types, including coaxial, STP, UTP, and fiber-optic. Each cable type has its own specific connector.
Finally, we discussed various types of network connectivity hardware and auxiliary devices and their use. Some users may need a cable modem, DSL modem, ONT, or access point to get onto the network. All wired computers will plug into a connectivity device, such as a hub or a switch, which in turn is connected to another connectivity device, which is often a router. Other devices on the network, such as firewalls, Ethernet over Power, and PoE injectors, provide additional services. And software-defined networking virtualizes all the network hardware rather than using physical devices.
Know the difference between LANs, WANs, PANs, MANs, SANs, and WLANs. A LAN is a local area network, which typically means a network in one centralized location. A WAN is a wide area network, which means several LANs in remote locations connected to each other. A PAN is a small Bluetooth network. A network that spans an area such as a city or a campus is a MAN. A SAN is designed specifically for storage, and a WLAN is like a LAN but wireless.
Know how computers connect to a network. It might seem simple, but remember that all computers need a NIC to connect to the network. There's a lot of configuration that happens automatically, and you may need to reconfigure the NIC or update drivers if things don't work properly.
Know about the different types of copper cable. The three types of copper cable you should know about are coaxial, unshielded twisted pair (UTP), and shielded twisted pair (STP). UTP comes in various types, including Cat 5, Cat 5e, Cat 6, and Cat 6a (among others, but these are the current standards in the exam objectives). For outdoor use, go with direct burial cable.
Understand the difference between a patch (straight-through) cable and a crossover cable. Patch cables are used to connect hosts to a switch or a hub. Crossover cables swap pins 1 and 3, and pins 2 and 6, on one end only. They are used to connect hubs to hubs, switches to switches, hosts to hosts, and hosts to routers.
Memorize the T568A and T568B cable standards. As painful as it might sound, you should memorize the pin order for these two standards. The T568A order is white/green, green, white/orange, blue, white/blue, orange, white/brown, brown. T568B is white/orange, orange, white/green, blue, white/blue, green, white/brown, brown. If it helps, note that the blue and brown pairs do not change; only the green and orange pairs do.
Know what a plenum cable is used for. Plenum cables do not release toxic gas when burned and therefore are required in spaces that circulate air (plenums) within buildings.
Understand performance characteristics of fiber-optic cable. Fiber can support higher transmission rates and longer distances than copper cable can. It's also immune to electrical interference.
Know which types of connectors are used for the different types of network cables. Coaxial cable uses F type or BNC connectors. Twisted pair uses RJ-11 or RJ-45 connectors or can be terminated at a punchdown block. Fiber connectors include straight tip (ST), subscriber connector (SC), and Lucent (or local) connector (LC).
Know which networking devices are used to connect to the Internet. Internet connections used to be made through modems on plain old telephone lines. Digital connections today are made through cable modems and DSL modems, and optical connections through optical network terminals (ONTs).
Know what hubs, switches, access points, patch panels, and routers are. These are all network connectivity devices. Hubs and switches are used to connect several computers or groups of computers to each other. Switches can be managed or unmanaged. An access point is any port where a computer plugs into a network, but the term typically refers to wireless access points. Patch panels are rack-mounted devices with multiple (usually dozens of) wired access points. Routers are more complex devices that are often used to connect network segments or networks to each other.
Know what a firewall and Power over Ethernet (PoE) provide. A firewall is a security device that blocks or allows network traffic to pass through it. PoE provides electricity over Ethernet cables.
Understand the premise of software-defined networking (SDN). SDN is a cloud service that virtualizes network hardware. Instead of requiring a physical switch or router, SDN can replicate their services through software.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance-based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answer compares to the authors', refer to Appendix B.
Look at the pictures of network cable connectors and label each one.
THE FOLLOWING COMPTIA A+ 220-1101 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
Networking protocols are a lot like human languages in that they are the languages that computers speak when talking to each other. If computers don't speak the same language, they won't be able to communicate. To complicate matters, there are dozens of different languages that computers can use. Just like humans, computers can understand and use multiple languages. Imagine that you are on the street and someone comes up to you and speaks in Spanish. If you know Spanish, you will likely reply in kind. It doesn't matter if both of you know English as well because you've already established that you can communicate. On the other hand, it's going to be a pretty futile conversation if you don't know Spanish. This same concept applies to computers that are trying to communicate. They must have a network protocol in common in order for the conversation to be successful.
Throughout the years, hundreds of network protocols have been developed. As the use of networking exploded, various companies developed their own networking hardware, software, and proprietary protocols. Some were incorporated as an integral part of the network operating system, such as Banyan VINES. One-time networking giant Novell had IPX/SPX. Microsoft developed NetBEUI. Apple created AppleTalk. Others included DECnet, SNA, and XNS. While a few achieved long-term success, most have faded into oblivion. The one protocol suite that has survived is TCP/IP. Although it has some structural advantages, such as its modularity, it didn't necessarily succeed because it was inherently superior to other protocols. It succeeded because it is the protocol of the Internet.
This chapter focuses on the TCP/IP protocol suite. It is the protocol suite used on the Internet, but it's also the protocol suite used by nearly every home and business network today. We'll start by taking a quick look at the history of TCP/IP and the model on which it's based. Then we'll dive deeper into TCP/IP structure and the individual protocols it comprises. From there, we'll spend some time on IP addressing, including IPv4 and IPv6. Entire books have been written on TCP/IP—so there's no way we could cover it entirely in one chapter. Nor do you need to know every last detail right now. Instead, we'll give you the foundation that you need to understand it well, work effectively with it in the field, and pass the A+ exam.
As we mentioned in the introduction, computers use a protocol as a common language for communication. A protocol is a set of rules that govern communications, much like a language in human terms. Of the myriad protocols out there, the key ones to understand are the protocols in the TCP/IP suite, which is a collection of different protocols that work together to deliver connectivity. Consequently, they're the only ones listed on the A+ exam objectives. In the following sections, we'll start with a look at its overall structure and then move into key protocols within the suite.
The Transmission Control Protocol/Internet Protocol (TCP/IP) suite is the most popular network protocol in use today, thanks mostly to the rise of the Internet. While the protocol suite is named after two of its hardest-working protocols, Transmission Control Protocol (TCP) and Internet Protocol (IP), TCP/IP actually contains dozens of protocols working together to help computers communicate with one another.
TCP/IP is robust and flexible. For example, if you want to ensure that the packets are delivered from one computer to another, TCP/IP can do that. If speed is more important than guaranteed delivery, then TCP/IP can provide that too. The protocol can work on disparate operating systems, such as UNIX, Linux, macOS, Windows, iOS, and Android. It can also support a variety of programs, applications, and required network functions. Much of its flexibility comes from its modular nature.
You're familiar with the seven-layer OSI model that we discussed in Chapter 5, “Networking Fundamentals.” Every protocol that's created needs to accomplish the tasks (or at least the key tasks) outlined in that model. The structure of TCP/IP is based on a similar model created by the U.S. Department of Defense—that is, the Department of Defense (DoD) model. The DoD model (sometimes referred to as the TCP/IP model) has four layers that map to the seven OSI layers, as shown in Figure 6.1.
FIGURE 6.1 The DoD and OSI models
The overall functionality between these two models is virtually identical; the layers just have different names. For example, the Process/Application layer of the DoD model is designed to combine the functionality of the top three layers of the OSI model. Therefore, any protocol designed against the Process/Application layer would need to be able to perform all the functions associated with the Application, Presentation, and Session layers in the OSI model.
TCP/IP's modular nature and common protocols are shown in Figure 6.2.
FIGURE 6.2 TCP/IP protocol suite
Working from the bottom up, you'll notice that the Network Access layer doesn't have any protocols, as such. This layer describes the type of network access method that you are using, such as Ethernet, Wi-Fi, or others.
The most important protocol at the Internet layer is IP. This is the backbone of TCP/IP. Other protocols at this layer work in conjunction with IP, such as Internet Control Message Protocol (ICMP) and Address Resolution Protocol (ARP).
At the Host-to-Host layer, there are only two protocols: TCP and User Datagram Protocol (UDP). Most applications will use one or the other to transmit data, although some can use both but will do so for different tasks.
The majority of TCP/IP protocols are located at the Process/Application layer. These include some protocols with which you may already be familiar, such as Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Post Office Protocol (POP), and others. Let's take a look at each of the layers in more detail.
At the Internet layer, there's one key protocol and a few helpful support protocols. The main workhorse of TCP/IP is the Internet Protocol (IP), and it can be found at this layer. IP is responsible for managing logical network addresses and ultimately getting data from point A to point B, even if there are dozens of points in between. We cover IP addressing in depth in the “Understanding IP Addressing” section later in this chapter.
There are three support protocols you should be aware of at this layer as well. Internet Control Message Protocol (ICMP) is responsible for delivering error messages. If you're familiar with the ping utility, you'll know that it utilizes ICMP to send and receive packets. Address Resolution Protocol (ARP) resolves logical IP addresses to physical MAC addresses built into network cards. This function is critical because in order to communicate, the sender ultimately needs to know the MAC address of the receiver. Reverse ARP (RARP) resolves MAC addresses to IP addresses.
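Sending raw ICMP packets requires elevated privileges, so the simplest way to watch ICMP in action from a script is to call the operating system's own ping utility, as in this small Python sketch (it assumes ping takes -c for the packet count on Unix-like systems and -n on Windows):

```python
import platform
import subprocess

def ping(host, count=4):
    """Send ICMP echo requests via the OS ping utility and return True on success."""
    flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(["ping", flag, str(count), host],
                            capture_output=True, text=True)
    print(result.stdout)
    return result.returncode == 0

if __name__ == "__main__":
    ping("127.0.0.1")   # the loopback address should always answer
```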
Next up is the Host-to-Host layer, and it has the fewest protocols. At this layer there are two alternatives within the TCP/IP suite: TCP and UDP. The major difference between the two is that TCP guarantees packet delivery through the use of a virtual circuit and data acknowledgments and UDP does not. Because of this, TCP is often referred to as connection-oriented, whereas UDP is connectionless. Because UDP is connectionless, it does tend to be somewhat faster, but we're talking about milliseconds here.
Another key concept to understand about TCP and UDP is the use of port numbers. Imagine a web server that is managing connections from incoming users who are viewing web content and others who are downloading files. TCP and UDP use port numbers to keep track of these conversations and make sure that the data gets to the right application and right end user. Conversely, when a client makes a request of a server, it needs to do so on a specific port to make sure that the right application on the server hears the request. For example, web servers are listening for HTTP requests on port 80, so web browsers need to make their requests on that port.
A good analogy for understanding port numbers is cable or satellite television. In this analogy, the IP address is your house. The cable company needs to know where to send the data. But once the data is in your house, which channel are you going to receive it on? If you want sports, that might be on one channel, but weather is on a different channel, and the cooking show is on yet another. You know that if you want a cooking show, you need to turn to channel 923 (or whatever). Similarly, the client computer on a network knows that if it needs to ask a question in HTTP, it needs to do it on port 80.
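Here's what the client side of that analogy looks like in Python: a TCP socket "tunes in" to the well-known HTTP port (80) on a server and sends a request. The hostname is just a placeholder—substitute any web server you can reach—and a UDP conversation would use SOCK_DGRAM instead of SOCK_STREAM:

```python
import socket

HOST = "example.com"   # placeholder web server
PORT = 80              # well-known port for HTTP

# TCP is connection-oriented: connect() completes the three-way handshake.
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    request = f"HEAD / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))
    print(sock.recv(1024).decode("ascii", errors="replace"))  # e.g. "HTTP/1.1 200 OK..."
```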
There are 65,536 ports, numbered from 0 to 65535. Ports 0 through 1023 are called the well-known ports and are assigned to commonly used services, and 1024 through 49151 are called the registered ports. All the ports from 49152 to 65535 are free to be used by application vendors. Fortunately, you don't need to memorize them all.
Table 6.1 shows the ports used by some of the more common protocols. You should know each of these for the A+ exam.
Service | Protocol | Port(s) |
---|---|---|
FTP | TCP | 20, 21 |
SSH | TCP | 22 |
Telnet | TCP | 23 |
SMTP | TCP | 25 |
DNS | TCP/UDP | 53 |
DHCP | UDP | 67, 68 |
TFTP | UDP | 69 |
HTTP | TCP | 80 |
POP3 | TCP | 110 |
NetBIOS/NetBT | TCP | 137, 139 |
IMAP4 | TCP | 143 |
SNMP | UDP | 161, 162 |
LDAP | TCP | 389 |
HTTPS | TCP | 443 |
SMB/CIFS | TCP | 445 |
RDP | TCP | 3389 |
TABLE 6.1 Common port numbers
A complete list of registered port numbers can be found at iana.org and several other sites, such as Wikipedia.
As we mentioned earlier in the chapter, most of the protocols within the TCP/IP suite are at the Process/Application layer. This is the layer of differentiation and flexibility. For example, if you want to browse the Internet, the HTTP protocol is designed for that. FTP is optimized for file downloads, and Simple Mail Transfer Protocol (SMTP) is used for sending email.
Before we get into the protocols themselves, let's take a quick look into a few key points on the TCP/IP suite's flexibility. There are literally dozens of protocols at the Process/Application layer, and they have been created over time as networking needs arose. Take HTTP, for example. The first official version was developed in 1991, nearly 20 years after TCP/IP was first implemented. Before this protocol was created, there weren't any effective client-server request-response protocols at this layer. HTTP let the client (web browser) ask the web server for a page, and the web server would return it. Going one step further, there was a need for secure transactions over HTTP—hence, the creation of HTTPS in 1994. As new applications are developed or new networking needs are discovered, developers can build an application or protocol that fits into this layer to provide the needed functionality. They just need to make sure that the protocol delivers what it needs to and can communicate with the layers below it. The following sections describe some of the more common Process/Application protocols and their ports—and the ones listed in the A+ exam objectives.
The File Transfer Protocol (FTP) is optimized to do what it says it does—transfer files. This includes both uploading and downloading files from one host to another. FTP is both a protocol and an application. Specifically, FTP lets you copy files, list and manipulate directories, and view file contents. You can't use it to execute applications remotely.
Whenever a user attempts to access an FTP site, they will be asked to log in. If it's a public site, you can often just use the login name anonymous and then provide your email address as the password. Of course, there's no rule saying that you have to give your real email address if you don't want to. If the FTP site is secured, you will need a legitimate login name and password to access it. If you are using a browser such as Chrome, Firefox, or Edge to connect via FTP, the correct syntax in the address window is ftp://username:password@ftp.ftpsite.com.
The big downside to FTP is that it's unsecure. It transmits usernames and passwords in plain text. If a potential hacker is monitoring network traffic, this information will come through quite clearly. Be aware of this when using FTP, and make sure the FTP password is something not used to log into any other services. For secure file transfers, there are other options including Secure FTP (SFTP) and FTP Secure (FTPS).
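Python's standard ftplib module lets you try the anonymous login described above without a browser. The server name below is a placeholder—point it at any public FTP site—and remember that everything here, including the credentials, crosses the network in plain text:

```python
from ftplib import FTP

# Placeholder host; substitute a real public FTP server.
with FTP("ftp.example.com", timeout=10) as ftp:
    ftp.login()                      # no arguments = anonymous login
    print(ftp.getwelcome())          # server banner
    ftp.retrlines("LIST")            # list the current directory
```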
Secure Shell (SSH) is a connection-oriented protocol that can be used to set up a secure Telnet session for remote logins or for remotely executing programs and transferring files. Because it's secure, it was originally designed to be a replacement for the unsecure telnet command. A common client interface using SSH is called OpenSSH (www.openssh.com).
Speaking of Telnet, it seems that it has been around since the beginning of time as a terminal emulation protocol. Someone using Telnet can log into another machine and “see” the remote computer in a window on their screen. Although this vision is text only, the user can manage files on that remote machine just as if they were logged in locally.
The problem with Telnet and other unsecure remote management options (such as RCP [remote copy] and FTP) is that the data they transmit, including passwords, is sent in plain text. Anyone eavesdropping on the line can intercept the packets and thus obtain usernames and passwords. SSH overcomes this by encrypting the traffic, including usernames and passwords.
This is the first of three protocols we'll look at devoted to email. Simple Mail Transfer Protocol (SMTP) is the protocol most commonly used to send email messages. Because it's designed to send only, it's referred to as a push protocol. SMTP is the protocol used to send email from mail server to mail server as well as from a mail server to an email client. An email client locates its email server by querying the DNS server for a mail exchange (MX) record. After the server is located, SMTP is used to push the message to the email server, which will then process the message for delivery.
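You can see the push behavior with Python's standard smtplib module. The mail server and addresses below are placeholders, and a real server will usually also require authentication:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"          # placeholder sender
msg["To"] = "bob@example.com"              # placeholder recipient
msg["Subject"] = "SMTP demo"
msg.set_content("Sent with Python's smtplib over port 25.")

# SMTP pushes the message to the mail server; the server relays it onward.
with smtplib.SMTP("mail.example.com", 25, timeout=10) as server:
    server.send_message(msg)
```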
You probably use Domain Name System (DNS) every day whether you realize it or not. Its purpose is to resolve hostnames to IP addresses. For example, let's say that you open your web browser and type in a Uniform Resource Locator (URL) such as https://www.wiley.com. Your computer needs to know the IP address of the server that hosts that website in order for you to connect to it. Through a DNS server, your computer resolves the URL to an IP address so communication can happen. DNS is so critical that we have an entire section dedicated to it later in this chapter.
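You can watch that resolution step happen with a few lines of Python; the operating system's resolver sends the DNS query on your behalf:

```python
import socket

hostname = "www.wiley.com"

# Simple lookup: one IPv4 address for the name.
print(hostname, "->", socket.gethostbyname(hostname))

# getaddrinfo returns every address the resolver knows about.
for *_, sockaddr in socket.getaddrinfo(hostname, 443, type=socket.SOCK_STREAM):
    print("  ", sockaddr[0])
```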
Dynamic Host Configuration Protocol (DHCP) dynamically assigns IP addresses and other IP configuration information to network clients. Configuring your network clients to receive their IP addresses from a DHCP server reduces network administration headaches. We'll cover the mechanics of how DHCP works later in this chapter when we talk about IP addressing.
You already learned about FTP, and Trivial File Transfer Protocol (TFTP) is its lighter-weight cousin. It can transfer files much like FTP, but it's much simpler and faster. Table 6.2 highlights a few other key differences.
Feature | TFTP | FTP |
---|---|---|
Authentication | None required | Username/password (although you may be able to use anonymous) |
Protocol used | UDP (connectionless) | TCP (connection-oriented) |
Number of commands | 5 | About 70 |
Primary use | Transmitting configurations to and from network devices | Uploading and downloading files |
TABLE 6.2 TFTP vs. FTP
HTTP was once the most used Process/Application layer protocol. It manages the communication between a web server and client, and it lets you connect to and view all the content that you enjoy on the Internet. All the information transmitted by HTTP is plain text, which means that it's not secure. Therefore, it's not a good choice for transmitting sensitive or personal information, such as usernames and passwords, or for transmitting banking information. Because of that, it's been supplanted by HTTPS (covered later).
For a long time, Post Office Protocol 3 (POP3) was the preferred protocol for downloading email. It's been replaced in most installations by IMAP4 (covered later) because IMAP4 includes security and more features than POP3.
Network Basic Input/Output System (NetBIOS) is an application programming interface (API) that allows computers to communicate with each other over the network. It works at Layer 5 of the OSI model. Consequently, it needs to work with another network protocol to handle the functions of Layer 4 and below. NetBIOS running over TCP/IP is called NetBT, or NBT. Specifically, NetBIOS provides three services:
A name service, for registering and resolving NetBIOS names.
A connectionless datagram service, for sending small messages without establishing a session.
A connection-oriented session service, for reliable conversations between two computers.
For many years, Microsoft network clients were configured with a NetBIOS name, which was their network name. To communicate with another computer on the network, the NetBIOS name would need to be resolved (matched) to an IP address. This was done with a WINS (Windows Internet Name Service) server or LMHOSTS file and could not be performed across any routed connection (which includes the Internet).
If you're familiar with hostnames, they were somewhat analogous and could be one and the same or totally different. (If you're not familiar with hostnames and DNS, we cover them later in this chapter.) The big differences are that hostnames are resolved with a DNS server (or HOSTS file) and can work across the Internet. WINS was far inferior to DNS for name resolution, so Microsoft ended up adopting DNS like the rest of the industry.
Internet Message Access Protocol (IMAP) is a secure protocol designed to download email. Its current version is version 4, or IMAP4. It's the client-side email management protocol of choice, having replaced the unsecure POP3. Most current email clients, such as Microsoft Outlook and Gmail, are configured to be able to use either IMAP4 or POP3. IMAP4 has some definite advantages over POP3. They include:
Mail is stored on the server rather than downloaded and removed, so the same mailbox can be read from multiple devices.
Messages can be organized into folders on the server.
Messages can be searched and selectively downloaded instead of pulling down the entire mailbox.
Simple Network Management Protocol (SNMP) gathers and manages network performance information.
On your network, you might have several connectivity devices, such as routers and switches. A management device called an SNMP server can be set up to collect data from these devices (called agents) and ensure that your network is operating properly. Although SNMP is mostly used to monitor connectivity devices, many other network devices are SNMP-compatible as well. The most current version is SNMPv3.
The Lightweight Directory Access Protocol (LDAP) is a directory services protocol based on the X.500 standard. LDAP is designed to access information stored in an information directory typically known as an LDAP directory or LDAP database.
On your network, you probably have a lot of information, such as employee phone books and email addresses, client contact lists, and infrastructure and configuration data for the network and network applications. This information might not get updated frequently, but you might need to access it from anywhere on the network, or you might have a network application that needs access to this data. LDAP provides you with the access, regardless of the client platform from which you're working. You can also use access control lists (ACLs) to set up who can read and change entries in the database using LDAP. A common analogy is that LDAP provides access to and the structure behind your network's phone book.
To encrypt traffic between a web server and a client, Hypertext Transfer Protocol Secure (HTTPS) can be used. HTTPS connections are secured using either Secure Sockets Layer (SSL) or Transport Layer Security (TLS).
From the client (web browser) side, users will know that the site is secure because the browser will display a small padlock icon next to the address name.
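Python's standard http.client module negotiates the TLS connection on port 443 for you, which is a quick way to see an HTTPS request succeed. The hostname below is a placeholder:

```python
import http.client

# HTTPSConnection wraps the TCP connection to port 443 in TLS automatically.
conn = http.client.HTTPSConnection("www.example.com", 443, timeout=10)
conn.request("HEAD", "/")
response = conn.getresponse()
print(response.status, response.reason)   # e.g. 200 OK
conn.close()
```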
Server Message Block (SMB) is a protocol originally developed by IBM but then enhanced by Microsoft, IBM, Intel, and others. It's used to provide shared access to files, printers, and other network resources and is primarily implemented by Microsoft systems. In a way, it can function a bit like FTP only with a few more options, such as the ability to connect to printers, and more management commands. It's also known for its ability to make network resources easily visible through various Windows network apps (such as Network in File Explorer).
Common Internet File System (CIFS) is a Microsoft-developed dialect of the SMB protocol. The intent behind CIFS is that it can be used to share files and printers between computers, regardless of the operating system that they run. SMB/CIFS is the default file and print sharing protocol in Windows.
Developed by Microsoft, the Remote Desktop Protocol (RDP) allows users to connect to remote computers and run programs on them. When you use RDP, you see the desktop of the computer you've signed into on your screen. It's like you're really there, even though you're not.
When you use RDP, the computer at which you are seated is the client and the computer you're logging into is the server. RDP client software is available for Windows, Linux, macOS, iOS, and Android. Microsoft's RDP client software is called Remote Desktop Connection. The server uses its own video driver to create video output and sends the output to the client using RDP. Conversely, all keyboard and mouse input from the client is encrypted and sent to the server for processing. RDP also supports sound, drive, port, and network printer redirection. In a nutshell, this means that if you could see, hear, or do it sitting at the remote computer, you could see, hear, or do it at the RDP client too.
Services using this protocol can be great for telecommuters. It's also very handy for technical support folks, who can log into and assume control over a remote computer. It's a lot easier to troubleshoot and fix problems when you can see what's going on and “drive.”
To communicate on a TCP/IP network, each device needs to have a unique IP address. Any device with an IP address is referred to as a host. This can include servers, workstations, printers, routers, and other devices. If you can assign it an IP address, it's a host. As an administrator, you can assign the host's IP configuration information manually, or you can have it automatically assigned by a DHCP server. On the client, this is done through the network adapter's TCP/IP properties. You'll see in Figure 6.3 that the system is set to receive information automatically from a DHCP server. We'll look at how to configure this in more depth in Chapter 7, “Wireless and SOHO Networks.”
FIGURE 6.3 TCP/IP Properties
An IPv4 address is a 32-bit hierarchical address that identifies a host on the network. It's typically written in dotted-decimal notation, such as 192.168.10.55. Each of the numbers in this example represents 8 bits (or 1 byte) of the address, also known as an octet. The same address written in binary (how the computer thinks about it) would be 11000000 10101000 00001010 00110111. As you can see, the dotted-decimal version is a much more convenient way to write these numbers.
The addresses are said to be hierarchical, as opposed to “flat,” since the numbers at the beginning of the address identify groups of computers that belong to the same network. Because of the hierarchical address structure, we're able to do really cool things, such as route packets between local networks and on the Internet.
A great example of hierarchical addressing is your street address. Let's say that you live in apartment 4B at 123 Main Street, Anytown, Kansas, USA. If someone sent you a letter via snail mail, the hierarchy of your address helps the postal service and carrier deliver it to the right place. First and broadest is USA. Kansas helps narrow it down a bit, and Anytown narrows it down more. Eventually we get to your street, the right number on your street, and then the right apartment. If the address space were flat (for example, Kansas didn't mean anything more specific than Main Street), or you could use any name you wanted for your state, it would be really hard to get the letter to the right spot.
Take this analogy back to IP addresses. They're set up to organize networks logically in order to make delivery between them possible and then to identify an individual node within a network. If this structure weren't in place, a huge, multi-network space like the Internet probably wouldn't be possible. It would simply be too unwieldy to manage.
As we mentioned earlier, each IP address is written in four octets in dotted-decimal notation, but each octet represents 8 bits. A binary bit is a value with two possible states: on equals 1 and off equals 0. If the bit is turned on, it has a decimal value based upon its position within the octet. An off bit always equals 0. Take a look at Figure 6.4, which will help illustrate what we mean.
FIGURE 6.4 Binary values
If all the bits in an octet are off, or 00000000, the corresponding decimal value is 0. If all bits in an octet are on, you would have 11111111, which is 255 in decimal.
Where it starts to get more entertaining is when you have combinations of zeroes and ones. For example, 10000001 is equal to 129 (128 + 1), and 00101010 is equal to 42 (32 + 8 + 2).
As you work with IPv4 addresses, you'll see certain patterns emerge. For example, you may be able to count quickly from left to right in an octet pattern, such as 128, 192, 224, 240, 248, 252, 254, and 255. That's what you get if you have (starting from the left) 1, 2, 3, and so forth up to 8 bits on in sequence in the octet.
It's beyond the scope of this book to get into too much detail on binary-to-decimal conversion, but this primer should get you started.
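If you want to check your binary math, Python will happily do the conversions both ways. This snippet works through the same numbers used above:

```python
def octet_to_binary(value):
    """Render a decimal octet (0-255) as its 8-bit binary string."""
    return format(value, "08b")

def binary_to_octet(bits):
    """Convert an 8-bit binary string back to its decimal value."""
    return int(bits, 2)

print(octet_to_binary(255))            # 11111111
print(binary_to_octet("10000001"))     # 129 (128 + 1)
print(binary_to_octet("00101010"))     # 42  (32 + 8 + 2)

# The full dotted-decimal address from earlier, octet by octet:
print(" ".join(octet_to_binary(int(o)) for o in "192.168.10.55".split(".")))
# 11000000 10101000 00001010 00110111
```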
Each IP address is made up of two components: the network ID and the host ID. The network portion of the address always comes before the host portion. Because of the way IP addresses are structured, the network portion does not have to be a specific fixed length. In other words, some computers will use 8 of the 32 bits for the network portion and the other 24 for the host portion, whereas other computers might use 24 bits for the network portion and the remaining 8 bits for the host portion. Here are a few rules that you should know about when working with IP addresses:
The host ID can't be all 0s; that value refers to the network itself.
The host ID can't be all 1s; that value is the broadcast address for the network.
Every host on the same network must have a unique host ID.
Computers are able to differentiate where the network ID ends and the host address begins through the use of a subnet mask. This is a value written just like an IP address and may look something like 255.255.255.0. Any bit that is set to a 1 in the subnet mask makes the corresponding bit in the IP address part of the network ID (regardless of whether the bit in the IP address is on or off). When setting bits to 1 in a subnet mask, you always have to turn them on sequentially from left to right, so that the bits representing the network address are always contiguous and come first. The rest of the address will be the host ID. The number 255 is the highest number you will ever see in IP addressing, and it means that all bits in the octet are set to 1.
Here's an example based on two numbers that we have used in this chapter. Look at the IP address of 192.168.10.55. Let's assume that the subnet mask in use with this address is 255.255.255.0. This indicates that the first three octets are the network portion of the address and the last octet is the host portion; therefore, the network portion of this ID is 192.168.10 and the host portion is 55. If the subnet mask were 255.255.0.0, the computer would see its network address as 192.168 and its host address as 10.55. As you can see, the subnet mask can make the exact same address appear as though it's on a different network. If you're ever dealing with network communication issues, the IP address and subnet mask are among the first things you should check.
FIGURE 6.5 Manual TCP/IP configuration with an IP address, subnet mask, and default gateway
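Python's built-in ipaddress module applies the subnet mask for you, which makes it easy to confirm the example above—the same address appears to be on a different network depending on the mask:

```python
import ipaddress

# Same IP address, two different subnet masks.
for mask in ("255.255.255.0", "255.255.0.0"):
    iface = ipaddress.ip_interface(f"192.168.10.55/{mask}")
    print(f"mask {mask}: network = {iface.network}")
# mask 255.255.255.0: network = 192.168.10.0/24
# mask 255.255.0.0: network = 192.168.0.0/16
```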
The designers of TCP/IP designated classes of networks based on the first 3 bits of the IP address. As you will see, classes differ in how many networks of each class can exist and the number of unique hosts that each network can accommodate. Here are some characteristics of the three classes of addresses that you will commonly deal with:
Class A addresses use the first octet as the network ID, so there are relatively few Class A networks, but each one can hold roughly 16.7 million hosts.
Class B addresses use the first two octets as the network ID; each Class B network can hold 65,534 hosts.
Class C addresses use the first three octets as the network ID; each Class C network can hold 254 hosts.
Table 6.3 shows the IPv4 classes, their ranges, and their default subnet masks.
Class | First octet | Default subnet mask | Comments |
---|---|---|---|
A | 1–127 | 255.0.0.0 | For very large networks; 127 reserved for the loopback address |
B | 128–191 | 255.255.0.0 | For medium-sized networks |
C | 192–223 | 255.255.255.0 | For smaller networks with fewer hosts |
D | 224–239 | N/A | Reserved for multicasts (sending messages to multiple systems) |
E | 240–255 | N/A | Reserved for testing |
TABLE 6.3 IPv4 address classes
The IP address can be written in shorthand to show how many bits are being used for the network portion of the address. For example, you might see something like 10.0.0.0/8. The /8 on the end indicates that the first 8 bits are the network portion of the address, and the other 24 are the host portion. Another example is 192.168.1.0/24, which is a Class C network with a default subnet mask.
The default subnet masks for each class of address are by no means the only subnet masks that can be used. In fact, if they were, it would severely limit the number of possible TCP/IP networks available. To resolve this and provide additional addressing flexibility, there is classless inter-domain routing (CIDR). This is just a fancy way of saying, “You don't have to use the default subnet masks.” From a practical standpoint, CIDR minimizes the concept of IP address classes and primarily focuses on the number of bits that are used as part of the network address.
Taking a look at the defaults can help illustrate how CIDR works. If you have a Class A default mask of 255.0.0.0, that is 11111111.00000000.00000000.00000000 in binary. A Class B default mask of 255.255.0.0 is 11111111.11111111.00000000.00000000 in binary. There's no rule that says you have to use an entire octet of bits to represent the network portion of the address. The only rule is that you have to add 1s in a subnet mask from left to right. What if you wanted to have a mask of 255.240.0.0 (11111111.11110000.00000000.00000000); can you do that? The answer is yes, and that is essentially what CIDR does. Table 6.4 shows you every available subnet mask and its equivalent slash notation.
Subnet mask | Notation |
---|---|
255.0.0.0 | /8 |
255.128.0.0 | /9 |
255.192.0.0 | /10 |
255.224.0.0 | /11 |
255.240.0.0 | /12 |
255.248.0.0 | /13 |
255.252.0.0 | /14 |
255.254.0.0 | /15 |
255.255.0.0 | /16 |
255.255.128.0 | /17 |
255.255.192.0 | /18 |
255.255.224.0 | /19 |
255.255.240.0 | /20 |
255.255.248.0 | /21 |
255.255.252.0 | /22 |
255.255.254.0 | /23 |
255.255.255.0 | /24 |
255.255.255.128 | /25 |
255.255.255.192 | /26 |
255.255.255.224 | /27 |
255.255.255.240 | /28 |
255.255.255.248 | /29 |
255.255.255.252 | /30 |
TABLE 6.4 CIDR values
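There's no need to memorize Table 6.4 by brute force; the slash value is simply the number of 1 bits in the mask. As a small sketch (using Python's ipaddress module, our assumption rather than anything the chapter requires), you can convert between the two forms like this:

import ipaddress

# Dotted-decimal mask for a given prefix length.
print(ipaddress.ip_network("0.0.0.0/20").netmask)     # 255.255.240.0

# Prefix length for a given dotted-decimal mask: count the 1 bits.
mask_value = int(ipaddress.IPv4Address("255.255.240.0"))
print(bin(mask_value).count("1"))                     # 20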
Earlier, we said that CIDR minimizes the impact of classes, but there are still some restrictions. The /8 through /15 notations can be used only with Class A network addresses; /16 through /23 can be used with Class A and B network addresses; /24 through /30 can be used with Class A, B, and C network addresses. You can't use anything more than /30, because you always need at least 2 bits for hosts.
Now that you know that you can do it, the question is, why would you do it? The answer is that it provides you with the flexibility to configure your network.
Here's an example. Say that your default network address is 10.0.0.0/8. That means that you have 24 bits left for hosts on that one network, so you can have just over 16.7 million hosts. How realistic is it that one company will have that many hosts? It's not realistic at all, and that doesn't even bring up the issue that the network infrastructure wouldn't be able to handle physically having that many hosts on one network. However, let's say that you work for a large corporation with about 15 divisions and some of them have up to 3,000 hosts. That's plausible. What you can do is to set up your network so that each division has its own smaller portion of the network (a subnet) big enough for its needs. To hold 3,000 hosts and have a bit of room for expansion, you need 12 bits (2¹² – 2 = 4,094), meaning that you have 20 bits left over for the network address. Thus, your new configuration could be 10.0.0.0/20.
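The arithmetic in that example generalizes: find the smallest number of host bits n such that 2^n – 2 covers the hosts you need, and whatever is left over becomes the network prefix. Here is a minimal sketch of that calculation, assuming a 32-bit IPv4 address:

import math

# Each subnet loses two addresses (the network and broadcast addresses),
# so we need the smallest n where 2**n - 2 >= hosts_needed.
hosts_needed = 3000
host_bits = math.ceil(math.log2(hosts_needed + 2))
prefix_length = 32 - host_bits

print(host_bits)       # 12 -> 2**12 - 2 = 4,094 usable host addresses
print(prefix_length)   # 20 -> a /20 subnet, such as 10.0.0.0/20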
All the addresses that are used on the Internet are called public addresses. They must be purchased, and only one computer can use any given public address at one time. The problem that presented itself was that the world was soon to run out of public IP addresses while the use of TCP/IP was growing. Additionally, the structure of IP addressing made it impossible to “create” or add any new addresses to the system.
To address this, a solution was devised to allow for the use of TCP/IP without requiring the assignment of a public address. The solution was to use private addresses. Private addresses are not routable on the Internet; they were intended for use on private networks only. Because private addresses were never meant to appear on the Internet, they don't have to be globally unique, which essentially created an unlimited supply of IP addresses that companies could use within their own network walls.
Although this solution helped alleviate the problem of running out of addresses, it created a new one. The private addresses that all of these computers have aren't globally unique, but they need to be in order to access the Internet.
A service called Network Address Translation (NAT) was created to solve this problem. NAT runs on your router and handles the translation of private, nonroutable IP addresses into public IP addresses. There are three ranges reserved for private, nonroutable IP addresses, as shown in Table 6.5. You should memorize these ranges and be able to identify them on sight.
Class | IP address range | Subnet mask | Number of hosts |
---|---|---|---|
A | 10.0.0.0–10.255.255.255 | 255.0.0.0 | 16.7 million |
B | 172.16.0.0–172.31.255.255 | 255.240.0.0 | 1 million |
C | 192.168.0.0–192.168.255.255 | 255.255.0.0 | 65,536 |
TABLE 6.5 Private IP address ranges
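A handy way to double-check whether an address falls into one of these reserved ranges is the ipaddress module's is_private flag; the sample addresses below are only illustrations.

import ipaddress

# The RFC 1918 private ranges from Table 6.5 are built into the library.
for address in ("10.1.2.3", "172.31.0.10", "192.168.1.115", "155.120.100.1"):
    print(address, ipaddress.ip_address(address).is_private)

# The first three addresses print True; 155.120.100.1 is public and prints False.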
These private addresses cannot be used on the Internet and cannot be routed externally. The fact that they are not routable on the Internet is actually an advantage because a network administrator can use them essentially to hide an entire network from the Internet.
This is how it works: The network administrator sets up a NAT-enabled router, which functions as the default gateway to the Internet. The external interface of the router has a public IP address assigned to it that has been provided by the ISP, such as 155.120.100.1. The internal interface of the router will have an administrator-assigned private IP address within one of these ranges, such as 192.168.1.1. All computers on the internal network will then also need to be on the 192.168.1.0 network. To the outside world, any request coming from the internal network will appear to come from 155.120.100.1. The NAT router translates all incoming packets and sends them to the appropriate client. This type of setup is very common today.
You may look at your own computer, which has an address in a private range, and wonder, “If it's not routable on the Internet, then how am I on the Internet?” Remember, the NAT router technically makes the Internet request on your computer's behalf, and the NAT router is using a public IP address.
Two critical TCP/IP services you need to be aware of are Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS). Both are services that need to be installed on a server, and both provide key functionality to network clients.
A DHCP server is configured to provide IP configuration information to clients automatically (dynamically), in what is called a lease. It's called that because the information is not permanently granted to the client computer, and the client must periodically request a renewed lease or a new lease. The information typically provided in a lease includes an IP address, a subnet mask, the address of the default gateway, and the addresses of one or more DNS servers.
DHCP servers can provide a lot more than these items, but they are the most common. The list of parameters that the DHCP server can provide is configured as part of a scope. DHCP servers can have one or more scopes to service clients on different subnets or network segments. A scope typically includes the range of IP addresses to hand out, the subnet mask, the lease duration, the default gateway (router) address, DNS server addresses, and the DNS domain name (such as whatever.com) for the client to use.
DHCP clients need to be configured to obtain an IP address automatically. This is done by going into the network card's properties and then the TCP/IP properties, as was shown previously in Figure 6.3.
When the client boots up, it will not have an IP address. To ask for one, it will send a DHCP DISCOVER broadcast out on the network. If a DHCP server is available to hear the broadcast, it will respond directly to the requesting client using the client's MAC address as the destination address. The process is shown in Figure 6.6.
FIGURE 6.6 The DHCP request process
Notice that the DHCP DISCOVER and DHCP REQUEST messages are broadcasts, which means two important things. First, every computer on the network segment receives and needs to process the broadcast message. It's like snail mail that's addressed to “the current resident” at an address, and the computer is compelled to read it. Excessive broadcasts can dramatically slow network performance. Second, broadcasts do not go through routers. Thus, if the client and the DHCP server are on opposite sides of a router, there will be a problem. There are two resolutions. First, make the router the DHCP server. Second, install a DHCP relay agent on the subnet that doesn't have the DHCP server. It will be configured with the address of the DHCP server, and it will forward the request directly to the DHCP server on behalf of the client.
Automatic Private IP Addressing (APIPA) is a TCP/IP standard used to automatically configure IP-based hosts that are unable to reach a DHCP server. APIPA addresses are in the 169.254.0.0–169.254.255.255 range, with a subnet mask of 255.255.0.0. If you see a computer that has an IP address beginning with 169.254, you know that it has configured itself.
Typically, the only time that you will see this is when a computer is supposed to receive configuration information from a DHCP server but for some reason that server is unavailable. Even while configured with this address, the client will continue to broadcast for a DHCP server so that it can be given a real address once the server becomes available.
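Because the APIPA range is fixed, spotting a self-assigned address is easy to automate. Here's a small sketch (the sample address is made up) that flags anything in 169.254.0.0/16, which Python calls link-local:

import ipaddress

address = ipaddress.ip_address("169.254.10.25")
print(address.is_link_local)                              # True
print(address in ipaddress.ip_network("169.254.0.0/16"))  # True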
APIPA is also sometimes known as zero configuration networking or address autoconfiguration. Both of these terms are marketing efforts, created to remove the perceived difficulty of configuring a TCP/IP network. While TCP/IP has generally been considered difficult to configure (compared to other protocols), APIPA can make it so that a TCP/IP network can run with no configuration at all! For example, say that you are setting up a small local area network that has no need to communicate with any networks outside of itself. To accomplish this, you can use APIPA to your advantage. Set the client computers to receive DHCP addresses automatically, but don't set up a DHCP server. The clients will configure themselves and be able to communicate with each other using TCP/IP. The only downside is that this will create a little more broadcast traffic on your network. This solution is only really effective for a nonrouted network of fewer than 100 computers. Considering that most networks today need Internet access, it's unlikely that you'll run across a network configuration like this.
DNS has one function on the network: to resolve hostnames to IP addresses. This sounds simple enough, but it has profound implications.
Think about using the Internet. You open your browser, and in the address bar, you type the name of your favorite website, something like www.google.com, and press Enter. The first question your computer asks is, “Who is that?” Your machine requires an IP address to connect to the website. The DNS server provides the answer, “That is 64.233.177.106.” Now that your computer knows the address of the website you want, it's able to traverse the Internet to find it.
Think about the implications of that for just a minute. We all probably use Google several times a day, but in all honesty how many of us know its IP address? It's certainly not something we are likely to have memorized. Much less, how could you possibly memorize the IP addresses of all the websites that you regularly visit? Because of DNS, it's easy to find resources. Whether you want to find Coca-Cola, Toyota, Amazon, or thousands of other companies, it's usually pretty easy to figure out how. Type in the name with a .com on the end of it, and you're usually right. The only reason this is successful is because DNS is there to perform resolution of that name to the corresponding IP address.
DNS works the same way on an intranet (a local network not attached to the Internet) as it does on the Internet. The only difference is that instead of helping you find www.google.com, it may help you find Jenny's print server or Joe's file server. From a client-side perspective, all you need to do is configure the host with the address of a legitimate DNS server and you should be good to go.
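You can watch a hostname-to-IP-address resolution happen with a couple of lines of Python's socket module; the addresses returned will almost certainly differ from the example above, since large sites publish many addresses.

import socket

# Ask the configured DNS server (or the local hosts file) for an IPv4 address.
print(socket.gethostbyname("www.google.com"))

# getaddrinfo also returns IPv6 (AAAA) results when they exist.
for info in socket.getaddrinfo("www.google.com", 80, proto=socket.IPPROTO_TCP):
    print(info[4][0])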
If a company wants to host its own website, it also needs to maintain two public DNS servers with information on how to get to the website. (Two servers are required for redundancy.) An advantage of using ISPs or web hosting companies to host the website is that they are then also responsible for managing the DNS servers.
Each DNS server has a database, called a zone file, which maintains records of hostname to IP address mappings. Within a zone file, you will see information that looks something like this:
mydomain.com.  IN  SOA    ns.mydomain.com.    ;Start of Authority record
mydomain.com.  IN  NS     ns.mydomain.com.    ;name server for mydomain.com
mydomain.com.  IN  MX     mail.mydomain.com.  ;mail server for mydomain.com
mydomain.com.  IN  A      192.168.1.25        ;IPv4 address for mydomain.com
               IN  AAAA   2001:db8:19::44     ;IPv6 address for mydomain.com
ns             IN  A      192.168.1.2         ;IPv4 address for ns.mydomain.com
www            IN  CNAME  mydomain.com.       ;www.mydomain.com is an alias for mydomain.com
www2           IN  CNAME  www                 ;www2.mydomain.com is another alias for mydomain.com
mail           IN  A      192.168.1.26        ;IPv4 address for mail.mydomain.com
Five columns of information are presented. From left to right, they are the name of the domain or host (such as www), the class (IN, which stands for Internet), the record type (such as SOA, NS, MX, A, or CNAME), the data for the record (an IP address or a hostname), and an optional comment following the semicolon. Table 6.6 describes the most common DNS record types.
Type | Meaning |
---|---|
SOA | Start of Authority. It signifies the authoritative DNS server for that zone. |
NS | Name Server. It's the name or address of the DNS server for that zone. |
MX | Mail Exchange. It's the name or address of the email server. |
A | IPv4 host record. |
AAAA | Called “quad A,” it's the host record for IPv6 hosts. |
CNAME | Canonical Name. It's an alias; it allows multiple names to be assigned to the same host or address. |
TXT | Text record. Used to enter human-readable or machine-readable data. Today, text records are used primarily for email spam prevention and domain ownership verification. |
TABLE 6.6 Common DNS record types
The DNS server uses the zone file whenever a computer makes a query. For example, if you were to ask this DNS server, “Who is mydomain.com?” the response would be 192.168.1.25. If you ask it, “Who is www.mydomain.com?” it would look and see that www is an alias for mydomain.com and provide the same IP address.
If you are the DNS administrator for a network, you will be required to manage the zone file, including entering hostnames and IP addresses, as appropriate.
Email spam is a problem. The only people who don't agree with this are the spammers themselves. One of the tricks that spammers use is to spoof (or fake) the domain name they are sending emails from. DNS, through the use of TXT records, can help email servers determine if incoming messages are from a trusted source rather than a spoofed one.
Three standards used to battle email spam are Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM), and Domain-based Message Authentication, Reporting, and Conformance (DMARC). Each one can be used in a DNS TXT record to help thwart malicious users from using a company's domain name to send unauthorized emails.
SPF is the simplest of the three. It authenticates an email server based on its IP address. In an SPF TXT record, the administrator specifies all servers that are legitimate email senders for that domain, based on their IP addresses. Note that we're not referring to client computers sending email, but the addresses of the email servers that are legitimate. When a receiving email server gets a message, it looks at the sending domain in the message's return-path header and queries that domain's DNS servers, which provide the list of approved senders' IP addresses. If the original sending machine's IP address is on the list, the email is accepted. If not, the email is rejected.
DKIM is a bit more involved, as it authenticates using encryption through a public-private key pair. Each email sent by the server includes a digital signature in the headers, which has been created with the server's private key. When the receiving email server gets the message, it looks up the sending domain's published public key and uses it to verify the signature. If the signature doesn't check out, the message is flagged as a fake.
DMARC builds on both SPF and DKIM and essentially combines them together into one framework. It's not an authentication method per se—rather, it allows a domain owner to decide how they want email from their domain to be handled if it fails either an SPF or a DKIM authentication. Options include doing nothing (letting the email through), quarantining the email (i.e., sending it to a spam folder), or rejecting the email. In addition, it allows the domain owner to see where emails that claim to come from their domain actually originate from.
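To make this concrete, here is a sketch of what SPF, DKIM, and DMARC TXT records might look like in the zone file for the example domain used earlier. The policy values (the approved sender, the selector name, the truncated key, the quarantine action, and the reporting address) are illustrative assumptions, not recommendations.

mydomain.com.         IN TXT "v=spf1 mx ip4:192.168.1.26 -all"                             ;SPF: only the listed mail server may send
selector1._domainkey  IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."                             ;DKIM public key (truncated)
_dmarc                IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@mydomain.com"  ;DMARC policy and reporting address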
The Internet is really big—so big that there's no way one DNS server could possibly manage all of the computer name mappings out there. The creators of DNS anticipated this, and they designed it in a way that reduces potential issues. For example, let's say that you are looking for the website www.wiley.com. When the DNS server that your computer is configured to use is queried for a resolution, it will first check its zone file to see if it knows the IP address. If not, it then checks its cache to see if the record is in there. The cache is a temporary database of recently resolved names and IP addresses. If it still doesn't know the answer, it can query another DNS server asking for help. The first server it will ask is called a root server.
If you look back at the sample zone file shown earlier, you might notice that the first few rows contained mydomain.com. (the dot at the end, the “trailing dot,” is intentional). The Internet name space is designed as a hierarchical structure, and the dot at the end is the broadest categorization, known as “the root.” At the next level of the hierarchy are the top-level domains, such as .com, .net, .edu, .jp, and others. Below that are the second-level domains, like Google, Microsoft, and Yahoo. Below that there are subdomains (which are optional) and hostnames. Moving down, the levels in the hierarchy get more and more specific, until the name represents an exact host. Figure 6.7 shows an example.
FIGURE 6.7 Internet name hierarchy
There are 13 global root servers. All DNS servers should be configured to ask a root server for help. The root server will return the name of a top-level domain DNS server. The querying DNS server will then ask that server for help. The process continues, as shown in Figure 6.8, until the querying DNS server finds a server that is able to resolve the name www.wiley.com. Then the querying DNS server will cache the resolved name so that subsequent lookups are faster. The length of time that the name is held in cache is configurable by the DNS administrator.
FIGURE 6.8 The DNS name resolution process
The present incarnation of TCP/IP that is used on the Internet was originally developed in 1973. Considering how fast technology evolves, it's pretty amazing to think that the protocol still enjoys immense popularity about 50 years later. This version is known as IPv4.
There are a few problems with IPv4, though. One is that we're quickly running out of available network addresses, and the other is that TCP/IP can be somewhat tricky to configure.
If you've dealt with configuring custom subnet masks, you may nod your head at the configuration part, but you might be wondering how we can run out of addresses. After all, IPv4 has 32 bits of addressing space, which allows for nearly 4.3 billion addresses! With the way it's structured, only about 250 million of those addresses are actually usable, and all of those are pretty much spoken for.
A new version of TCP/IP has been developed, called IPv6. Instead of a 32-bit address, it provides for 128-bit addresses. That provides for 3.4 × 10³⁸ addresses, which theoretically should be more than enough that they will never run out globally. (Famous last words, right?)
IPv6 also has many standard features that are optional (but useful) in IPv4. While the addresses may be more difficult to remember, the automatic configuration and enhanced flexibility make the new version sparkle compared to the old one. Best of all, it's backward compatible with IPv4 and can run on the same computer at the same time, so networks can migrate to IPv6 without a complete restructure.
Understanding the IPv6 addressing scheme is probably the most challenging part of the protocol enhancement. The first thing you'll notice is that, of course, the address space is longer. The second is that IPv6 uses hexadecimal notation instead of the familiar dotted decimal of IPv4. Its 128-bit address structure looks something like what is shown in Figure 6.9.
FIGURE 6.9 IPv6 address
The new address is composed of eight 16-bit fields, each represented by four hexadecimal digits and separated by colons. The letters in an IPv6 address are not case sensitive. IPv6 uses three types of addresses: unicast, anycast, and multicast. A unicast address identifies a single node on the network. An anycast address refers to one that has been assigned to multiple nodes. A packet addressed to an anycast address will be delivered to the closest node. Sometimes you will hear this referred to as one-to-nearest addressing. Finally, a multicast address is one used by multiple hosts, and is used to communicate to groups of computers. IPv6 does not employ broadcast addresses. Multicasts handle that functionality. Each network interface can be assigned one or more addresses.
Just by looking at unicast and anycast addresses, it's impossible to tell the difference between them. Their structure is the same; it's their functionality that's different. The first four fields, or 64 bits, refer to the network and subnetwork. The last four fields are the interface ID, which is analogous to the host portion of the IPv4 address. Typically, the first 56 bits within the address are the routing (or global) prefix, and the next 8 bits refer to the subnet ID. It's also possible to have shorter routing prefixes, though, such as 48 bits, meaning that the subnet ID will be longer.
The Interface ID portion of the address can be created in one of four ways. It can be created automatically using the interface's MAC address, procured from a DHCPv6 server, assigned randomly, or configured manually.
Multicast addresses can take different forms, but they all share the same 8-bit prefix: the first 8 bits are all set to 1, written as ff.
In IPv4, the subnet mask determines the length of the network portion of the address. The network address was often written in an abbreviated form, such as 169.254.0.0/16. The /16 indicates that the first 16 bits are for the network portion and that corresponds to a subnet mask of 255.255.0.0. While IPv6 doesn't use a subnet mask, the same convention for stating the network length holds true. An IPv6 network address could be written as 2001:db8:3c4d::/48. The number after the slash indicates how many bits are in the routing prefix.
Because the addresses are quite long, there are a few ways that you can write them in shorthand; in the world of IPv6, it's all about eliminating extra zeroes. For example, take the address 2001:0db8:3c4d:0012:0000:0000:1234:56ab. The first common way to shorten it is to remove all of the leading zeroes. Thus it could also be written as 2001:db8:3c4d:12:0:0:1234:56ab. The second accepted shortcut is to replace consecutive groups of zeroes with a double colon. So now the example address becomes 2001:db8:3c4d:12::1234:56ab. It's still long, but not quite as long as the original address.
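Python's ipaddress module applies both of these shorthand rules automatically, which makes it a convenient way to check your work; this is just an illustration, not an exam requirement.

import ipaddress

address = ipaddress.ip_address("2001:0db8:3c4d:0012:0000:0000:1234:56ab")
print(address.compressed)   # 2001:db8:3c4d:12::1234:56ab  (leading zeroes dropped, :: inserted)
print(address.exploded)     # 2001:0db8:3c4d:0012:0000:0000:1234:56ab  (fully expanded form)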
A fairly common occurrence today is a mixed IPv4-IPv6 network. As mentioned earlier, IPv6 is backward compatible. In the address space, this is accomplished by setting the first 80 bits to 0, the next 16 bits to 1, and the final 32 bits to the IPv4 address. In IPv6 format, the IPv4 address looks something like ::ffff:c0a8:173. You will often see the same address written as ::ffff:192.168.1.115 to enable easy identification of the IPv4 address.
There are a few more addresses you need to be familiar with. In IPv4, the autoconfiguration (APIPA) address range was 169.254.0.0/16. IPv6 accomplishes the same task with the link local address fe80::/10. Every IPv6-enabled interface is required to have a link local address, and they are nonroutable. The IPv4 loopback address of 127.0.0.1 has been replaced with ::1/128 (typically written as just ::1). Global addresses (for Internet use) are 2000::/3, and multicast addresses are FF00::/8. Figure 6.10 shows the output of an ipconfig command, and you can see the IPv4 address configuration as well as the IPv6 link local address. Table 6.7 summarizes the IPv6 address ranges you should be familiar with.
Address | Use |
---|---|
0:0:0:0:0:0:0:0 | Equals ::, and is equivalent to 0.0.0.0 in IPv4. It usually means that the host is not configured. |
0:0:0:0:0:0:0:1 | Also written as ::1. Equivalent to the loopback address of 127.0.0.1 in IPv4. |
2000::/3 | Global unicast address range for use on the Internet. |
FC00::/7 | Unique local unicast address range. |
FE80::/10 | Link local unicast range. |
FF00::/8 | Multicast range. |
TABLE 6.7 IPv6 address ranges
FIGURE 6.10 ipconfig output with IPv4 and IPv6 addresses
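The special ranges in Table 6.7 are also recognized by the ipaddress module, which can serve as a quick sanity check when you're reading addresses off a machine; the sample addresses here are arbitrary.

import ipaddress

print(ipaddress.ip_address("::1").is_loopback)          # True
print(ipaddress.ip_address("fe80::1").is_link_local)    # True
print(ipaddress.ip_address("ff02::1").is_multicast)     # True
print(ipaddress.ip_address("fe80::1") in ipaddress.ip_network("fe80::/10"))   # True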
As you learned earlier in this chapter, the subnet mask on an IPv4 network determines the network address. Said differently, it's the mechanism by which networks are defined. Computers configured to be on different networks talk to each other through a router, which sends packets from one network to another. Therefore, the router is the physical device that divides logical networks from each other. In addition to physical and logical networks, one additional term you need to be familiar with is the virtual network. There are two types of virtual networks we'll cover here: virtual local area networks and virtual private networks.
One of the limitations of typical routed network configurations is that computers on the same side of the router can't easily be broken into multiple networks and still communicate with each other. This is because if a sending computer knows that the destination IP address is on another network, it sends its data directly to the router—its default gateway. Other computers on the physical segment will ignore the message because it's not addressed to them. The router then takes a look at the real destination address and sends it out one of its ports, other than the one it came in on, to reach the destination network.
The virtual local area network (VLAN) is designed to help segment physical networks into multiple logical (virtual) networks. You may recall from Chapter 5 that VLANs are created by using a managed switch. The switch uses Spanning Tree Protocol (STP) to manage configurations and to ensure that there are no infinite network loops. (That's when data gets sent out and bounces between two or more switches, never getting to a destination. Loops are bad.) A VLAN can provide the following benefits:
Figure 6.11 shows two potential VLAN configurations. In the first one, computers on one switch are assigned to different VLANs. In the second, the concept is extended to include multiple switches.
One of the questions often asked is, “What's the difference between a VLAN and a subnet?” First, let's look at the key similarity—both are capable of breaking up broadcast domains on a network, which helps reduce network traffic. Also, if you are using both, the recommended configuration is that subnets and VLANs have a 1:1 relationship, one subnet per VLAN. You can configure multiple subnets to be on one VLAN—it's called a super scope—but it gets trickier to manage.
Beyond separating broadcast domains, VLANs and subnets are almost entirely different. Recall that VLANs are implemented on switches, and routers are needed to subnet. Consequently, VLANs work at Layer 2 of the OSI model and deal with physical MAC addresses. Routers work at Layer 3, and work with logical IP addresses.
FIGURE 6.11 Two VLAN configurations
As networks grow beyond simple physical limitations (such as an office or a building) to include clients from all over the world, the need to secure data across public connections becomes paramount. One of the best methods of addressing this is to tunnel the data. Tunneling sends private data across a public network by placing (encapsulating) that data into other packets. Most tunnels are a virtual private network (VPN). A sample VPN is shown in Figure 6.12.
A VPN is a secure (private) network connection that occurs through a public network. The private network provides security over an otherwise unsecure environment. VPNs can be used to connect LANs together across the Internet or other public networks, or they can be used to connect individual users to a corporate network. This is a great option for users who work from home or travel for work. With a VPN, the remote end appears to be connected to the network as if it were connected locally. From the server side, a VPN requires dedicated hardware or a software package running on a server or router. Clients use specialized VPN client software to connect, most often over a broadband Internet link. Windows 10 comes with its own VPN client software (shown in Figure 6.13) accessible through Start ➢ Settings ➢ Network & Internet ➢ VPN, as do some other operating systems, and many third-party options are also available.
FIGURE 6.12 A VPN
FIGURE 6.13 Windows 10 VPN client
In this chapter, you learned about the protocol suite used on the Internet, TCP/IP. It's by far the most common protocol in worldwide use today. We started with TCP/IP structure. It's a modular suite that follows the DoD model, with different protocols performing unique tasks at each layer. We looked at individual protocols and their functions at the Internet, Host-to-host, and Process/Application layers. We also discussed ports and well-known port numbers for common protocols.
Next you learned about IP addressing. We started with a brief tutorial on converting binary numbers to decimal to make them easier to read. Then we looked at the different address classes, CIDR, public versus private IP addresses, and NAT. Then we followed with details on DHCP, APIPA, and DNS. Each of these services and concepts plays a unique role in managing TCP/IP on your network.
Next, you learned about the next generation of TCP/IP, IPv6. We talked about the seemingly infinite number of addresses as well as the fact that addresses are written in hexadecimal, which might take some getting used to—even for experienced technicians. Finally, we looked at working with IPv6 addresses, including shorthand notation and special addresses to be aware of.
We finished the chapter by looking at two types of virtual networks: VLANs and VPNs.
Understand how IPv4 addressing works. IP addresses are 32-bit addresses written as four octets in dotted-decimal notation, such as 192.168.5.18. To communicate on an IP network, a host also needs a subnet mask, which may look something like 255.255.255.0. If the host needs to communicate outside the local network, it also needs a default gateway, which is normally the internal address of the router.
Addresses can be static (manual) or dynamic (from a DHCP server). If a DHCP server is not available, a network client may use an APIPA address starting with 169.254.
Be able to identify IP address classes. Know how to identify Class A, B, and C IP addresses. Class A addresses will have a first octet in the 1 to 126 range. B is from 128 to 191, and C is from 192 to 223.
Understand the differences between TCP and UDP. TCP is a connection-based protocol that attempts to guarantee delivery. UDP is connectionless, which makes it a bit faster, but it doesn't guarantee packet delivery.
Know common TCP/IP ports. Some common protocol and port pairings that you should know are FTP (20 and 21), SSH (22), Telnet (23), SMTP (25), DNS (53), DHCP (67, 68), TFTP (69), HTTP (80), POP3 (110), NetBIOS/NetBT (137, 139), IMAP (143), SNMP (161, 162), LDAP (389), HTTPS (443), SMB/CIFS (445), and RDP (3389).
Know the private IP address ranges. Private IP addresses will be in one of three ranges: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.
Know what NAT does. Network Address Translation (NAT) translates private, nonroutable IP addresses into public IP addresses. It allows computers on a private network to access the Internet.
Know what DHCP does. A DHCP server provides IP addresses and configuration information to network hosts. The configuration is provided as a lease, and all lease information is configured in a scope on the DHCP server. Clients that need to have the same address at all times can be configured using a reservation, which grants an address based on a MAC address.
Know about the APIPA range. IP addresses in the 169.254.0.0/16 range are APIPA addresses.
Know what DNS does. A DNS server resolves hostnames to IP addresses.
Be familiar with common DNS record types. Record types include A (for IPv4 hosts) and AAAA (for IPv6 hosts), MX (mail exchange), and TXT (text). Special TXT records that help combat spam are SPF, DKIM, and DMARC.
Understand how IPv6 addressing works. IPv6 addresses are 128-bit addresses written as eight fields of four hexadecimal characters, such as 2001:0db8:3c4d:0012:0000:0000:1234:56ab. Using shorthand conventions, this address can also be written as 2001:db8:3c4d:12::1234:56ab.
Addresses can be static or dynamic. APIPA does not exist in IPv6 but has been replaced by a link local address.
Know the difference between unicast, anycast, and multicast in IPv6. Unicast addresses are for a single node on the network. Anycast can represent a small group of systems. An anycast message will be delivered to the closest node. Multicast messages are delivered to all computers within a group.
Recognize the special classes of IPv6 addresses. The loopback address is ::1. Global unicast addresses are in the 2000::/3 range. Unique local unicast addresses are in the FC00::/7 range, link local addresses are FE80::/10, and FF00::/8 addresses are multicast.
Understand the differences between a VLAN and a VPN. A virtual local area network (VLAN) is a logical network configured through a managed switch. A virtual private network (VPN) is a secure point-to-point connection over a public network.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance-based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
Match the following protocols (services) to their respective ports in the table:
Protocol (service) | Port(s) |
---|---|
20, 21 | |
22 | |
23 | |
25 | |
53 | |
67, 68 | |
69 | |
80 | |
110 | |
137–139 | |
143 | |
161, 162 | |
389 | |
443 | |
445 | |
3389 |
THE FOLLOWING COMPTIA A+ 220-1101 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
Over the last two chapters, we've talked a lot about foundational networking knowledge. We've discussed theoretical networking models, physical topologies, cables and connectors, and connectivity devices. We also spent an entire chapter devoted to the most common protocol of all, TCP/IP. The one critical technology that we haven't covered yet is wireless networking.
Because of the unique technology of wireless networking and its huge popularity, it feels appropriate to talk about it as a separate entity. That said, it's important to remember that wireless networking is just like wired networking, only without the wires. You still need to figure out how to get resources connected to each other and give the right people access while keeping the bad people at bay. You're now just playing the game with slightly different rules and many new challenges.
We'll start this chapter with the last of our key networking “theory” discussions, this time on the categories of wireless networking standards. From there, we'll move on to picking out an Internet connection type and setting up and configuring small networks. This is really where the rubber meets the road. Understanding the theory and technical specifications of networking is fine, but the true value in all this knowledge comes in being able to make good recommendations and implement the right network for your client's needs.
Wireless networking is so common today that it's taken for granted. When first introduced, wireless networking was slow and unreliable, but it's now fast and pretty stable, not to mention convenient. It seems like everywhere you go there are Internet cafes or fast-food restaurants with wireless hotspots. Nearly every mobile device sold today has Internet capabilities. No matter where you go, you're likely just seconds away from being connected to the Internet.
The most common term you'll hear thrown around referring to wireless networking today is Wi-Fi. While the term was originally coined as a marketing name for 802.11b, it's now used as a nickname referring to the family of IEEE 802.11 standards. That family comprises the primary wireless networking technology in use today, but other wireless technologies are out there, too. We'll break down wireless technologies into four groups: 802.11, Bluetooth, long-range fixed wireless, and radio frequency. Each technology has its strengths and weaknesses and fills a computing role.
As a technician, it will fall to you to provide users with access to networks, the Internet, and other wireless resources. You must make sure that their computers and mobile devices can connect, that users can get their email, and that downtime is something that resides only in history books. To be able to make that a reality, you must understand as much as you can about wireless networking and the technologies discussed in the following sections.
In the United States, wireless LAN (WLAN) standards are created and managed by the Institute of Electrical and Electronics Engineers (IEEE). The most commonly used WLAN standards are in the IEEE 802.11 family. Eventually, 802.11 will likely be made obsolete by newer standards, but that is some time off. IEEE 802.11 was ratified in 1997 and was the first standardized WLAN implementation. There are over 20 802.11 standards defined, but you will only hear a few commonly mentioned: 802.11a, b, g, n, ac, and ax. As previously mentioned, several wireless technologies are on the market, but 802.11 is the one currently best suited for WLANs.
In concept, an 802.11 network is similar to an Ethernet network, only wireless. At the center of an Ethernet network is a connectivity device, such as a hub, switch, or router, and all computers are connected to it. Wireless networks are configured in a similar fashion, except that they use a wireless router or wireless access point instead of a wired connectivity device. In order to connect to the wireless hub or router, the client needs to know the service-set identifier (SSID) of the network. SSID is a fancy term for the wireless network's name. Wireless access points may connect to other wireless access points, but eventually they connect back to a wired connection with the rest of the network.
802.11 networks use the Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) access method instead of Ethernet's Carrier Sense Multiple Access/Collision Detection (CSMA/CD). Packet collisions are generally avoided, but when they do happen, the sender will need to wait a random period of time (called a back-off time) before transmitting again.
Since the original 802.11 standard's publication in 1997, several upgrades and extensions have been released. The primary characteristics that define them are speed, maximum distance, frequency (which includes channels), and modulation technique.
Speed and distance are networking concepts that you should be familiar with. Frequency is the portion of the radio spectrum in which the technology broadcasts. Channels, which we will cover in more depth later, are subdivisions within a frequency band. Finally, there is modulation, which refers to how the computer converts digital information into radio signals that can be transmitted over the air. There are three types of wireless modulation used today: frequency-hopping spread spectrum (FHSS), direct-sequence spread spectrum (DSSS), and orthogonal frequency division multiplexing (OFDM).
The mathematics and theories of these transmission technologies are beyond the scope of this book and far beyond the scope of this exam. We bring them up because as we talk about different Wi-Fi standards, we might note that they use such-and-such modulation. Knowing the basics of how they are different helps you understand why those standards are not compatible with each other. If the sending system is using FHSS and the receiving system is expecting DSSS (or OFDM), they won't be able to talk to each other. With that, let's dive into each of the 802.11 standards you need to know.
The original 802.11 standard was ratified in 1997 and defines WLANs transmitting at 1 Mbps or 2 Mbps bandwidths using the 2.4 GHz frequency spectrum. The frequency-hopping spread spectrum (FHSS) and direct-sequence spread spectrum (DSSS) modulation techniques for data encoding were included in this standard.
There were never any 802.11 (with no letter after the 11) devices released—it's a standards framework. All in-market versions of 802.11 have a letter after their name to designate the technology.
The 802.11a standard provides WLAN bandwidth of up to 54 Mbps in the 5 GHz frequency spectrum. The 802.11a standard also uses a more efficient encoding system, orthogonal frequency division multiplexing (OFDM), rather than FHSS or DSSS.
This standard was ratified in 1999, but devices didn't hit the market until 2001. Thanks to its encoding system, it was significantly faster than 802.11b (discussed next) but never gained widespread popularity. The two were ratified as standards right around the same time, but 802.11b devices beat 802.11a to market and were significantly cheaper. It would be shocking to see an 802.11a device in use today.
The 802.11b standard was ratified in 1999 as well, but device makers were much quicker to market, making this the de facto wireless networking standard for several years. 802.11b provides for bandwidths of up to 11 Mbps (with fallback rates of 5.5, 2, and 1 Mbps) in the 2.4 GHz range. The 802.11b standard uses DSSS for data encoding. You may occasionally still see 802.11b devices in use, but they are becoming rare. If you encounter them, encourage the users to upgrade to something faster. They will appreciate the increase in speed!
Ratified in 2003, the 802.11g standard provides for bandwidths of 54 Mbps in the 2.4 GHz frequency spectrum using OFDM or DSSS encoding. Because it operates in the same frequency and can use the same modulation as 802.11b, the two standards are compatible. Because of the backward compatibility and speed upgrades, 802.11g replaced 802.11b as the industry standard for several years, and it is still somewhat common today.
As we mentioned, 802.11g devices are backward compatible with 802.11b devices, and both can be used on the same network. That was initially a huge selling point for 802.11g hardware and helped it gain popularity very quickly. However, there are some interoperability concerns of which you should be aware. 802.11b devices are not capable of understanding OFDM transmissions; therefore, they are not able to tell when the 802.11g access point is free or busy. To counteract this problem, when an 802.11b device is associated with an 802.11g access point, the access point reverts back to DSSS modulation to provide backward compatibility. This means that all devices connected to that access point will run at a maximum of 11 Mbps. To optimize performance, administrators would upgrade to all 802.11g devices and configure the access point to be G-only.
One additional concept to know about when working with 2.4 GHz wireless networking is channels. We've said before that 802.11b/g works in the 2.4 GHz range. Within this range, the Federal Communications Commission (FCC) has defined 14 different 22 MHz communication channels. This is analogous to a 14-lane highway, with each lane being 22 MHz wide. An illustration of this is shown in Figure 7.1.
FIGURE 7.1 2.4 GHz communication channels
Although 14 channels have been defined for use in the United States, you're allowed to configure your wireless networking devices only to the first 11. When you install a wireless access point and wireless NICs, they will all auto-configure their channel, and this will probably work okay for you. If you are experiencing interference, changing the channel might help. And if you have multiple overlapping wireless access points, you will need to have nonoverlapping channels to avoid communications problems. (We'll talk about this more in the “Installing and Configuring SOHO Networks” section later in this chapter.) Two channels will not overlap if there are four channels between them. If you need to use three nonoverlapping channels, your only choices are 1, 6, and 11. Notice in Figure 7.1 that those three channels are highlighted.
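The 1, 6, 11 rule falls straight out of the arithmetic: 2.4 GHz channel centers sit 5 MHz apart (channel 1 is centered at 2412 MHz), so two 22 MHz-wide channels interfere whenever their centers are less than 22 MHz apart. Here is a small sketch of that check, with the channel math as our stated assumption:

# Center frequency (MHz) of 2.4 GHz channels 1-13: 2412, 2417, 2422, ...
def center_mhz(channel):
    return 2412 + 5 * (channel - 1)

# Two channels overlap if their 22 MHz-wide bands are closer than one width apart.
def overlaps(a, b, width=22):
    return abs(center_mhz(a) - center_mhz(b)) < width

print(overlaps(1, 6))    # False (25 MHz apart, safe to use side by side)
print(overlaps(1, 5))    # True  (only 20 MHz apart, so they interfere)
print([ch for ch in range(2, 12) if not overlaps(1, ch)])   # [6, 7, 8, 9, 10, 11]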
Continuing the evolution in Wi-Fi is 802.11n, which was ratified in 2010. The standard claims to support bandwidth up to 600 Mbps, but in reality the typical throughput is about 300–450 Mbps. That's still pretty fast. It works in both the 2.4 GHz and 5 GHz ranges.
802.11n achieves faster throughput in a couple of ways. Some of the enhancements include the use of wider 40 MHz channels, multiple-input multiple-output (MIMO), and channel bonding. Remember how 802.11g uses 22 MHz channels? 802.11n combines two channels to double (basically) the throughput. Imagine being able to take two garden hoses and combine them into one bigger hose. That's kind of what channel bonding does. MIMO means using multiple antennas rather than a single antenna to communicate information. (802.11n devices can support up to eight antennas, or four streams, because each antenna only sends or receives.) Channel bonding also allows the device to communicate simultaneously at 2.4 GHz and 5 GHz and bond the data streams, which increases throughput.
One big advantage of 802.11n is that it is backward compatible with 802.11a/b/g. This is because 802.11n is capable of simultaneously servicing 802.11b/g/n clients operating in the 2.4 GHz range as well as 802.11a/n clients operating in the 5 GHz range.
FIGURE 7.2 Channel availability in the 5 GHz spectrum
Technology is always marching forward and getting faster and cheaper, and wireless networking is no different. In January 2014, 802.11ac was approved, and you will often see it marketed as Wi-Fi 5. In many ways, it's a more powerful version of 802.11n in that it carries over many of the same features while adding in only a few new ones. It's the first commercial wireless standard that claims to offer the speed of Gigabit Ethernet.
802.11n introduced channel bonding and MIMO, and 802.11ac takes those concepts further. Instead of bonding two channels, 802.11ac can bond up to eight for a 160 MHz bandwidth. This results in a 333-percent speed increase. And 802.11ac greatly enhances MIMO. First, it doubles the MIMO capabilities of 802.11n to eight streams, resulting in another 100 percent speed increase. Second, it introduces multi-user MIMO (MU-MIMO) for up to four clients. MU-MIMO allows multiple users to use multiple antennae for communication simultaneously, whereas MIMO only allowed for one such connection at a time on a device.
The theoretical maximum speed of 802.11ac is 6.9 Gbps, but most 802.11ac devices peak at about 1.3 Gbps. Common maximum throughput is just under Gigabit Ethernet speeds, at around 800 Mbps. You might see devices in the marketplace that claim to offer speeds over 2 Gbps, but the reality is that you're unlikely to get those speeds in anything less than pristine, laboratory-like conditions with all top-of-the-line hardware. In other words, it's fast, but don't count on it being that fast.
The most important new feature of 802.11ac is beamforming, which can allow for range increases by sending the wireless signal in the specific direction of the client, as opposed to broadcasting it omnidirectionally. Beamforming helps overcome the fact that the range for a 5 GHz signal is inherently shorter than one for a 2.4 GHz signal. Not all 802.11ac routers support beamforming, however, so you might have some range limitations, depending on your hardware. And even if the router does support the technology, the maximum distance still won't be any more than what you will get out of 802.11n.
The newest version of Wi-Fi was released in 2019 and is known as Wi-Fi 6. The technical specification is 802.11ax. It gives network users what they crave—more speed. It also allows for more simultaneous users of any given access point, which is a big bonus as well.
Speed is, of course, the major reason that newer versions of technology get produced and accepted. Wi-Fi 6 has a few other advantages over its predecessor, though. Here's a list of enhancements versus Wi-Fi 5:
Better Connection Management Wi-Fi 6 introduces a new modulation technique called Orthogonal Frequency Division Multiple Access (OFDMA), which is an enhancement over the previously used OFDM. While OFDM was fast, it had a limitation that it could only transmit to one recipient at a time. OFDMA can handle communications with several clients at once.
In addition, the Wi-Fi 5 implementation of MU-MIMO was only for downlink connections—the router could send signals to multiple receivers. With Wi-Fi 6, MU-MIMO works for uplink connections, too, meaning the router can also simultaneously receive data from multiple clients at once. This lowers latency (time spent waiting) and allows for more simultaneous devices on one network.
Less Co-channel Interference As we noted with Wi-Fi 5, the channel bonding needed to achieve higher speeds was problematic in that it severely limited the number of nonoverlapping channels available. A network with multiple wireless access points should have their ranges overlap so users never hit Wi-Fi dead spots. Those ranges need to be on different, nonoverlapping channels to avoid interference. If there are only one or two channel options, then there's a problem.
In Wi-Fi 6, a feature called Basic Service Set (BSS) coloring adds a field to the wireless frame that distinguishes it from others, reducing the problems of co-channel interference. Specifically, the 802.11ax access point has the ability to change its color (and the color of associated clients) if it detects a conflict with another access point on the same channel. It's a very cool feature sure to be underappreciated by network users, but not network administrators.
With all of these enhancements, it might be tempting to run out and upgrade all of your wireless routers to Wi-Fi 6. You could, but there are three reasons to think long and hard about it before you do:
Table 7.1 summarizes the 802.11 standards we discussed here. You'll notice that 802.11ac operates in the 5 GHz range and uses OFDM modulation, meaning that it is not backward compatible with 802.11b. That's okay, though—as we said earlier, it's probably best to retire those old and slow devices anyway. Many 802.11ac wireless routers are branded as dual-band, meaning they can operate in the 2.4 GHz frequency as well for support of older 802.11g and 802.11n devices. Keep in mind, though, that dual-band 802.11ac routers can only operate in one frequency at a time, which slows performance a bit. If you are running a mixed environment and want to upgrade to an 802.11ac router, check the specifications carefully to see what it supports.
Type | Frequency | Maximum throughput | Modulation | Indoor range | Outdoor range |
---|---|---|---|---|---|
— | 2.4 GHz | 2 Mbps | FHSS/DSSS | 20 meters | 100 meters |
a | 5 GHz | 54 Mbps | OFDM | 35 meters | 120 meters |
b | 2.4 GHz | 11 Mbps | DSSS | 40 meters | 140 meters |
g | 2.4 GHz | 54 Mbps | DSSS/OFDM | 40 meters | 140 meters |
n | 5 GHz/2.4 GHz | 600 Mbps | OFDM/DSSS | 70 meters | 250 meters |
ac | 5 GHz | 6.9 Gbps | OFDM | 35 meters | 140 meters |
ax | 5 GHz/2.4 GHz | 9.6 Gbps | OFDMA | 35 meters | 140 meters |
TABLE 7.1 802.11 standards
If you think about a standard wired network and the devices required to make the network work, you can easily determine what types of devices are needed for 802.11 networks. Just as you do on a wired network, you need a wireless network card and some sort of central connectivity device.
Wireless network cards come in a variety of shapes and sizes, including PCI, PCIe, and USB. As for connectivity devices, the most common are wireless routers (as shown in Figure 7.3) and a type of switch called a wireless access point (WAP). WAPs look nearly identical to wireless routers and provide central connectivity like wireless routers, but they don't have nearly as many features. The main one most people worry about is Internet connection sharing. You can share an Internet connection among several computers using a wireless router but not with a WAP.
Most wireless routers and WAPs also have wired ports for RJ-45 connectors. The router shown in Figure 7.3 has four wired connections—in the figure you can't see them all, but they're available. The connected cable in this example is plugged into the port labeled Internet, which in this case goes to the DSL modem providing Internet access. We'll talk much more about installing and configuring a wireless router in the “Configuring Wireless Routers and Access Points” section later in this chapter.
FIGURE 7.3 Wireless router
We introduced Bluetooth in Chapter 5, “Networking Fundamentals,” in the context of a wireless personal area network (PAN). It was released in 1998 and is an industry standard, much like 802.11. However, Bluetooth is not designed to be a WLAN and therefore does not directly compete with Wi-Fi. In other words, it's not the right technology to use if you want to set up a wireless network for your office. It is, however, a great technology to use if you have wireless devices that you want your computer to be able to communicate with. Examples include smartphones, mice, keyboards, headsets, and printers.
Nearly every laptop comes with built-in Wi-Fi capabilities, and most also come Bluetooth-enabled. If not, you can install a USB Bluetooth adapter so that the laptop can communicate with Bluetooth devices. It's a safe bet to say that all smartphones and other mobile devices today support Bluetooth.
Several Bluetooth standards have been introduced. Newer versions have increased speed and compatibility with technologies such as Wi-Fi, LTE, IPv6, and Internet of Things (IoT) devices, along with reduced power requirements and increased security. The newest version is Bluetooth v5.3, which was introduced in 2021. Table 7.2 provides a high-level overview of the major versions and some key features.
Version | Basic Rate (BR) | Enhanced Data Rate (EDR) | High Speed (HS) | Low Energy (LE) | Slot Availability Masking (SAM) |
---|---|---|---|---|---|
1.x | X | ||||
2.x | X | X | |||
3.x | X | X | X | ||
4.x | X | X | X | X | |
5.x | X | X | X | X | X |
TABLE 7.2 Bluetooth major versions and features
Now let's talk about what some of these features mean:
Bluetooth v5 has several new features over its predecessor, v4.2. Along with introducing SAM, and better security, it is capable of doubling the throughput and achieving four times the maximum distance, up to about 240 meters (800 feet) outdoors with line-of-sight, when in LE mode. That drops to about 40 meters (133 feet) indoors. (Remember, when distances are stated, that's the theoretical maximum under ideal conditions.) It can't do both at once, though. It can increase throughput at a shorter distance, or it can go up to longer distances at a lower data rate. It's the first version that truly challenges other IoT technologies in that space. Subsequent improvements on v5 (v5.1, v5.2, and v5.3) have added features such as Angle of Arrival (AoA) and Angle of Departure (AoD), used to locate and track devices, better caching, improved LE power control and LE audio, and enhanced encryption. All versions of Bluetooth are backward compatible with older versions. Of course, when using mixed versions, the maximum speed will be that of the older device.
One of the key features of Bluetooth networks is their temporary nature. With Wi-Fi, you need a central communication point, such as a WAP or router. Bluetooth networks are formed on an ad hoc basis, meaning that whenever two Bluetooth devices get close enough to each other, they can communicate directly with each other. This dynamically created network is called a piconet. Bluetooth-enabled devices can communicate with up to seven other devices in one piconet. One device will be the primary, and the others will be secondaries. The primary controls communication between the devices. Multiple piconets can be combined together to form a scatternet, and it's possible for a primary of one piconet to be a secondary in another. In a scatternet, one of the Bluetooth devices serves as a bridge between the piconets.
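To make the piconet and scatternet rules concrete, here is a minimal sketch in Python. The device names and class design are purely illustrative assumptions (nothing here comes from the Bluetooth specification itself); the sketch just encodes the structure described above: one primary, up to seven active secondaries, and a device that belongs to more than one piconet acting as the scatternet bridge.

```python
# Illustrative model of a piconet: one primary device and up to seven
# active secondaries, as described above. Names are hypothetical.

class Piconet:
    MAX_SECONDARIES = 7  # Bluetooth allows up to seven active secondaries per piconet

    def __init__(self, primary):
        self.primary = primary
        self.secondaries = []

    def add_secondary(self, device):
        if len(self.secondaries) >= self.MAX_SECONDARIES:
            raise ValueError("A piconet supports at most seven active secondaries")
        self.secondaries.append(device)

    def members(self):
        return [self.primary] + self.secondaries


# Two piconets joined into a scatternet: "Headset" participates in both,
# serving as the bridge device between them.
office = Piconet("Laptop")
office.add_secondary("Mouse")
office.add_secondary("Keyboard")
office.add_secondary("Headset")

personal = Piconet("Phone")
personal.add_secondary("Headset")

bridge_devices = set(office.members()) & set(personal.members())
print(bridge_devices)  # {'Headset'}
```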
As mentioned earlier, Bluetooth devices have classically been computer and communications peripherals—keyboards, mice, headsets, and printers being common. Of course, smartphones and other mobile devices support Bluetooth as well. With the newest versions, we may see more IoT devices with Bluetooth capabilities as well.
One such device is a Bluetooth beacon, which is a small hardware transmitter that uses Bluetooth LE. It's broadcast only and transmits its unique identifier to nearby Bluetooth-enabled devices, such as smartphones. It can be used to send information such as marketing materials or coupons to someone with a smartphone in the vicinity of a product, or as a short-range navigation system.
There are four classes of Bluetooth devices, which differ in their maximum transmission range and power usage; the specifications are shown in Table 7.3. Most computer peripheral Bluetooth devices are Class 2 devices, which have a range of 10 meters (33 feet) and power usage of 2.5 mW. Most headsets are also Class 2, but some Class 1 headsets exist as well. Right now you might be confused, recalling from the standards discussion that Bluetooth v5 has a maximum range of 240 meters outdoors. That is for a Class 1 device running in LE mode only; devices running in classic BR or EDR modes will have shorter ranges.
TABLE 7.3 Bluetooth device classes and specifications

Class | Distance | Power usage
---|---|---
1 | 100 meters | 100 mW
2 | 10 meters | 2.5 mW
3 | 1 meter | 1 mW
4 | 0.5 meters | 0.5 mW
Bluetooth and Wi-Fi are short-range networking technologies. The most common implementations of Bluetooth extend about 10 meters, and besides, Bluetooth isn't designed for WLAN communications. The newest generations of Wi-Fi can transmit over 100 meters or so, but that's under ideal conditions, and at those longer distances, bandwidth is greatly reduced. In situations where the distance is too far for Wi-Fi but high-speed wireless network connectivity is needed, long-range fixed wireless could be the solution. Examples could include networking from building to building in a city or on a campus, bringing Internet access to a remote residence on a lake or in the mountains where wires can't be run, or providing Internet to boats and ships.
Long-range fixed wireless is a point-to-point technology that uses directional antennas to send and receive network signals. An antenna typically looks like a small satellite dish, usually only about 1 meter wide, and can usually send and receive signals for 10 to 20 kilometers. Different dishes will support different technologies. For example, some may support Wi-Fi 5 or 6, whereas others may support those plus cellular networking, too. As the technology is point-to-point, the sending and receiving devices must be pointed at each other—misalignment will cause network failure—and obstructions such as trees or other buildings will cause problems, too.
As you learned in the discussion on 802.11, Wi-Fi operates on the unlicensed frequencies of 2.4 GHz and 5 GHz. In 2020, 6 GHz was opened up to Wi-Fi in the United States as well. Other unlicensed frequencies include 900 MHz and 1.8 GHz and are used by devices such as walkie-talkies and cordless telephones.
The good news about unlicensed frequencies is that they are free to use. The bad news is that because everyone can use them, they are more susceptible to interference and eavesdropping. Take the Wi-Fi in your home, for example. If you live near other people, you can almost certainly see the Wi-Fi networks belonging to several of your neighbors. Hopefully they (and you) have secured them, but the signals are visible. The same concept applies to long-range fixed wireless. The difference is that here, the beams are directional, fairly narrow, and pointed at a specific receiver. For someone to eavesdrop, they would need to get within the range of the field, which could be challenging but is not impossible.
Other frequencies are licensed frequencies, meaning that use of them is granted by a governmental body. In the United States, it's the FCC. Think of AM and FM radio, for example. To operate on those frequencies, radio stations must be granted permission. Some companies may choose to pursue a licensed frequency for long-range fixed wireless as well. If access is granted, then that company is the only one that can use the frequency within a certain geographical area. This type of setup is uncommon.
In addition to network signals, power can be transmitted over long-range fixed wireless as well. It's analogous to the Power over Ethernet (PoE) technology you learned about in Chapter 5, but of course it's wireless. A common name for the technology is wireless power transfer (WPT).
The transmitting station generates the power and then transmits it via microwave or laser light toward the receiver. The receiving station gets the signal and converts it back to electricity. It's the exact same principle used by other radio transmissions. The difference is that in radio (for example, terrestrial FM radio) the power produced and received is miniscule. In this application, the power transmitted is much greater. A small-scale example of wireless power transfer is wireless charging pads for mobile devices.
Efficiency is an issue with current WPT implementations. The amount of energy lost in transit varies; some commercial providers claim about 70 percent efficiency, which is still far less efficient than transmission over copper cables. Providers may need to improve on that before WPT becomes commercially viable at large scale. Still, this is an exciting emerging field that could have significant implications for how power gets generated and transmitted in the future. WPT technology is currently regulated in the United States by the FCC.
The final group of networking standards we will look at is radio frequency. Technically speaking, all of the networking technologies we've discussed so far in this chapter use radio frequencies to communicate, so perhaps we're taking a bit of creative license here. The two technologies in this section are radio frequency identification and a subset of it called near field communication.
Radio frequency identification (RFID) is a communications standard that uses radio waves to facilitate communication. There are three types of RFID, based on the frequency used. This also affects the maximum distance that the waves can travel. Table 7.4 shows the three versions of RFID.
TABLE 7.4 RFID frequencies and characteristics

Name | Frequency | Distance
---|---|---
Low frequency (LF) | 125–134 kHz | 10 centimeters
High frequency (HF) | 13.56 MHz | 30 centimeters
Ultra-high frequency (UHF) | 856–960 MHz | 100 meters
The primary purpose of RFID is to identify items. Those items can be inventory in a store or warehouse, people, or even fast-moving things, such as race cars. An RFID system is made of three components: tag, reader, and antenna. Let's discuss what each component does:
Tag An RFID tag is fastened to the item that needs to be tracked. This can be temporary, such as an access badge an employee carries around, or it can be permanently affixed to an item. The RFID tag contains identifying information, such as an employee ID, product number, inventory number, or the like.
There are passive RFID tags and active RFID tags. Passive tags do not have a power source and draw their power from radio waves emitted by the RFID reader. This works only across short distances, typically about 25 meters or less. An active tag has its own power source (often a small battery) and may have its own antenna as well. Because it has power to generate a signal, the range for active tags is about 100 meters.
RFID is simplistic in networking terms. Its function is to identify items within a relatively short range. Two-way communication is pretty limited.
A subset of RFID is a very short distance technology known as near-field communication (NFC). NFC is designed to facilitate information sharing and, in particular, contactless payment. It transmits at 13.56 MHz, which is the same frequency as HF RFID.
Mobile contactless payment has made NFC explode in popularity over the last several years. Since Apple introduced the iPhone 6 back in 2014, nearly every smartphone manufacturer has equipped its phones with NFC, and many tablets have it as well. Apple got into the NFC payment arena in 2014 with the launch of Apple Pay, which can be used from iPhones, iPads, and the Apple Watch. In 2015, Google introduced Android Pay, an update to the older Google Wallet app. Users no longer need to carry around credit cards; their phone and their fingerprint are all they need to complete a purchase.
NFC uses radio frequency (RF) signals, and NFC devices can operate in three different modes: reader/writer mode (reading data from or writing data to an NFC tag), peer-to-peer mode (exchanging data between two NFC-enabled devices), and card emulation mode (acting like a contactless payment card).
Data rates are rather slow compared to other wireless methods, as NFC operates at 106 Kbps, 212 Kbps, and 424 Kbps. NFC always involves an initiator and a target. Let's say that you wanted to read an NFC tag in a poster. You would move your phone close to the tag, and the phone would generate a small RF field that would power the target. Data could then be read from the tag. Tags currently hold up to about 8 KB of data, which is more than enough to store a URL, a phone number, or other small items such as date and time or contact information.
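As rough arithmetic only, the sketch below estimates how long a full 8 KB tag payload would take to move at each of the three NFC data rates mentioned above. Real transfers add protocol overhead, so treat these as best-case figures; the constants are taken from the text.

```python
# Best-case transfer time for an 8 KB NFC tag at each NFC data rate.

TAG_BYTES = 8 * 1024          # ~8 KB maximum tag capacity cited above
RATES_KBPS = [106, 212, 424]  # NFC data rates in kilobits per second

for rate in RATES_KBPS:
    bits = TAG_BYTES * 8
    seconds = bits / (rate * 1000)
    print(f"{rate} Kbps -> about {seconds:.2f} s to read a full 8 KB tag")

# Approximate output: 0.62 s at 106 Kbps, 0.31 s at 212 Kbps, 0.15 s at 424 Kbps.
```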
In peer-to-peer mode, NFC data is transmitted in the NFC Data Exchange Format (NDEF), using the Simple NDEF Exchange Protocol (SNEP). SNEP uses the Layer 2 Logical Link Control Protocol (LLCP), which is connection-based, to provide reliable data delivery.
To use NFC, a user simply moves their device within range (about 10 centimeters, or 4 inches) of another NFC-enabled device. Then, using an app, the device will be able to perform the desired transaction, such as making a payment, reading information, or transferring data from one device to another.
NFC uses two different coding mechanisms to send data. At the 106 Kbps speed, it uses a modified Miller coding (delay encoding) scheme, whereas at faster speeds it uses Manchester coding (phase encoding). Neither method is encrypted, so it is possible to hack NFC communications using man-in-the-middle or relay attacks. (We'll go into detail about specific types of attacks in Chapter 17, “Security Concepts.”) Because of the limited distance of the RF signals, though, hacking is pretty hard to do. The potential attacker would need to be within a meter or so to attempt it.
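For a feel for what Manchester (phase) encoding does, here is a simplified sketch. The polarity convention is an assumption on my part (different standards flip it), but the core idea matches the description above: each bit becomes a transition within its bit period, which lets the receiver recover timing from the signal itself.

```python
# Simplified Manchester (phase) encoding: every data bit becomes two half-bit
# symbols, guaranteeing a mid-bit transition. Convention assumed here:
# 1 -> low-to-high, 0 -> high-to-low.

def manchester_encode(bits):
    symbols = []
    for bit in bits:
        symbols.extend([0, 1] if bit == 1 else [1, 0])
    return symbols

data = [1, 0, 1, 1, 0]
print(manchester_encode(data))
# [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```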
Because of the popularity of mobile payments, it's likely that NFC will be around for quite a few years.
You already know that for computers to talk to each other, they need to be connected in some way. This can be with physical wires or through the air with one of several wireless technologies. The type of connection you choose depends on the purpose of the connection and the needs of the user or users.
Nearly every small office has a network, and it seems like most homes these days have one or more computers that need access to the Internet. As a technician, you may be asked to set up or troubleshoot any number of these types of networks, often collectively referred to as small office, home office (SOHO) networks. This part of the chapter will give you the background you need to feel comfortable that you can get the job done. Most of the principles we talk about apply to larger networks as well, so they're helpful if you're in a corporate environment, too.
Before we get into installation and configuration, though, it's critical to introduce a topic that permeates this whole discussion: planning. Before installing a network or making changes to it, always plan ahead. When planning ahead, consider the user's or company's needs for today and the future. There is no sense in going overboard and recommending a top-of-the-line expensive solution if it's not needed, but if the network is likely to expand, a little up-front planning can save a lot of money and hassle in the long run.
In the following sections, we will look at how to plan and set up a SOHO network. Be advised that most of what we'll discuss from here on out isn't specifically listed as an A+ exam objective. However, we encourage you to read and understand it for two reasons. One, it has tangible real-life implications if you plan on working with computers. Two, even though the subject matter might not be a specific test objective, it will be related to subjects that are. For example, we'll touch on things like wireless channels when talking about setting up routers, how firewalls work, and differences between cable types, all of which are included in the official exam objectives. In other words, think of this next section as a combination of new material, connecting the dots on concepts you've already learned, and real-world application. With that, let's start by planning a network.
Before you run your first cable or place your first wireless router, know exactly where everything is supposed to go on the network. The only way you'll be able to do this is to plan ahead. If you have planned the installation before you begin, the actual physical work of installing the network will be much easier.
Every network is going to be somewhat different. If you are installing a home-based network, the planning is usually pretty simple: figure out where the Internet connection comes in, set up a wireless router, and configure wireless devices, such as laptops, smartphones, and home automation devices, to get on the network. If the network is going to be more complex, however, you should keep the following things in mind as you go through the planning process:
Determine how users will connect. If network users will all connect wirelessly, you can start figuring out how many wireless routers or access points you'll need. The best way to do this is to perform a wireless site survey. The rule of thumb for Wi-Fi 5 and older is no more than 30 users per access point. Wi-Fi 6 can handle more, but you will still cause performance issues if you cram too many people into one access point.
If you are going to have wired connections, start determining how long the cable runs will be. Remember that UTP has a maximum segment distance of 100 meters. If you have to go up from a patch panel, into a ceiling, and down through a wall or conduit, take that into account, too.
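The two rules of thumb above (roughly 30 users per access point for Wi-Fi 5 and older, and 100 meters per UTP run) lend themselves to quick back-of-the-envelope checks. The helper below is only a planning sketch under those assumptions; a proper wireless site survey should still drive the final access point count.

```python
# Rough planning helpers based on the rules of thumb above.
import math

def access_points_needed(wireless_users, users_per_ap=30):
    # ~30 users per AP for Wi-Fi 5 and older; Wi-Fi 6 can handle more.
    return math.ceil(wireless_users / users_per_ap)

def utp_run_ok(run_meters, max_meters=100):
    # Count the vertical distance up into the ceiling and back down as well.
    return run_meters <= max_meters

print(access_points_needed(75))   # 3 access points for 75 wireless users
print(utp_run_ok(45 + 3 + 3))     # True: 45 m horizontal plus two ceiling drops
print(utp_run_ok(110))            # False: too long for UTP; use fiber or a repeater
```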
While it may be apparent to you, your clients might not realize that in order to get on the Internet, computers need an Internet connection. Internet connections can be broadly broken into two categories: dial-up and broadband. It used to be that you had to weigh the pros and cons and figure out which one was best for your situation. Today, the choice is easy. Go broadband. The only time you would want to use dial-up is if broadband isn't available, and if that's the case, we're sorry!
Your Internet connection will give you online service through an Internet service provider (ISP). The type of service you want will often determine who your ISP choices are. For example, if you want cable Internet, your choices are limited to your local cable companies and a few national providers. We'll outline some of the features of each type of service and discuss why you might or might not recommend a specific connection type based on the situation.
One of the oldest ways of communicating with ISPs and remote networks is through dial-up connections. Even though dial-up Internet is a horribly antiquated service (and not an exam objective), we feel the need to cover it just in case you run into it. The biggest problem with dial-up is limitations on modem speed, which top out at 56 Kbps. Dial-up uses modems that operate over regular phone lines—that is, the plain old telephone service (POTS)—and cannot compare to speeds possible with DSL, cable modems, or even cellular. Reputable sources claim that dial-up Internet connections dropped from 74 percent of all U.S. residential Internet connections in 2000 to 3 percent in 2016. As of 2021, estimates are that about 2 million Americans still use dial-up Internet, which is slightly less than 1 percent of the U.S. population. Most of the people who still use dial-up do it because it's cheaper than broadband or high-speed access isn't available where they live.
The biggest advantage to dial-up is that it's cheap and relatively easy to configure. The only hardware you need is a modem and a phone cable. You dial in to a server (such as an ISP's server), provide a username and a password, and you're on the Internet.
Companies also have the option to grant users dial-up access to their networks. As with Internet connections, this option used to be a lot more popular than it is today. Microsoft offered a server-side product to facilitate this, called the Routing and Remote Access Service (RRAS), as did many other companies. ISPs and Remote Access Service (RAS) servers would use the Data Link layer Point-to-Point Protocol (PPP) to establish and maintain the connection.
It seems that dial-up is a relic from the Stone Age of Internet access. But because it's inexpensive, simple to set up, and available almost anywhere there's a telephone line, there are still a few situations where it might be the right solution.
Of course, there are reasons a dial-up connection might not be appropriate. The big one is speed. If your client needs to download files or has substantial data requirements, dial-up is probably too slow. Forget about video or audio streaming. In addition, with limited bandwidth, it's good only for one computer. It is possible to share a dial-up Internet connection among computers by using software tools, but it's also possible to push a stalled car up a muddy hill. Neither option sounds like much fun. If broadband isn't available in a certain location, satellite is probably a better option than dial-up.
One of the two most popular broadband choices for home use is digital subscriber line (DSL). It uses existing phone lines and provides fairly reliable high-speed access. To utilize DSL, you need a DSL modem (shown in Figure 7.5) and a network card in your computer. The ISP usually provides the DSL modem, but you can also purchase one at a variety of electronics stores. You use an Ethernet cable with an RJ-45 connector to plug your network card into the DSL modem (see Figure 7.6) and a phone cord to plug the DSL modem into the phone outlet. If you need to plug a landline into the same phone jack as your DSL modem, you will need a DSL splitter (such as the one shown in Figure 7.7), which plugs into the wall jack.
FIGURE 7.5 A DSL modem
FIGURE 7.6 The back of the DSL modem
FIGURE 7.7 A DSL splitter
There are several different forms of DSL, including high bit-rate DSL (HDSL), symmetric DSL (SDSL), very high bit-rate DSL (VDSL), and asymmetric DSL (ADSL). Table 7.5 summarizes the general speeds of each. Keep in mind that the maximum speeds decrease as the installation gets farther away from the phone company's equipment.
TABLE 7.5 DSL standards and approximate speeds

Standard | Download speed | Upload speed
---|---|---
ADSL | Up to 8 Mbps | Up to 1 Mbps
SDSL | Up to 2.5 Mbps | Up to 2.5 Mbps
HDSL | Up to 42 Mbps | Up to 8 Mbps
VDSL | Up to 52 Mbps | Up to 16 Mbps
ADSL was the most popular form of DSL for many years. It's asymmetrical because it supports download speeds that are faster than upload speeds. Dividing up the total available bandwidth this way makes sense because most Internet traffic is downloaded, not uploaded. Imagine a 10-lane highway. If you knew that 8 out of 10 cars that drove the highway went south, wouldn't you make eight lanes southbound and only two lanes northbound? That is essentially what ADSL does.
ADSL and your voice communications can work at the same time over the phone line because they use different frequencies on the same wire. Regular phone communications use frequencies from 0 to 4 kHz, whereas ADSL uses frequencies in the 25.875 kHz to 138 kHz range for upstream traffic and in the 138 kHz to 1,104 kHz range for downstream traffic. Figure 7.8 illustrates this.
FIGURE 7.8 Voice telephone and ADSL frequencies used
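Comparing the widths of those frequency bands shows why downloads get the lion's share of the bandwidth. The sketch below does the arithmetic with the numbers from the paragraph above; spectrum width is only a rough proxy for throughput (modulation and line quality matter too), so treat the ratio as illustrative.

```python
# How the ADSL spectrum splits between upstream and downstream traffic.

upstream_khz = 138 - 25.875      # upstream: 25.875 kHz to 138 kHz
downstream_khz = 1104 - 138      # downstream: 138 kHz to 1,104 kHz

print(f"Upstream band:   {upstream_khz:.3f} kHz")
print(f"Downstream band: {downstream_khz:.0f} kHz")
print(f"Downstream gets about {downstream_khz / upstream_khz:.1f}x the spectrum")
# Roughly 8.6x, which lines up with the 'eight lanes southbound' analogy above.
```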
The first ADSL standard was approved in 1998 and offered maximum download speeds of 8 Mbps and upload speeds of 1 Mbps. The newest standard (ADSL2+, approved in 2008) supports speeds up to 24 Mbps download and 3.3 Mbps upload. Most ADSL communications are full-duplex.
Many ISPs have moved from ADSL to VDSL, which offers 52 Mbps downloads and 16 Mbps uploads over telephone wires. In practice, service providers will offer many plans with different speeds, starting at about 10 Mbps to 12 Mbps download and 1 Mbps upload. If you want more speed, you will pay more for it. In addition, just because you pay for a certain speed doesn't mean you will get it. The farther away you are from the phone exchange, the slower your speed. Line quality also affects speed, because poorer lines have more attenuation (signal loss).
One major advantage that DSL providers tout is that with DSL you do not share bandwidth with other customers, whereas that may not be true with cable modems.
To summarize, DSL's advantages include dedicated (nonshared) bandwidth, the use of existing telephone lines, and relatively low cost.
There are some potential disadvantages as well: speeds drop the farther you are from the phone company's equipment, line quality affects performance, and availability depends on your local telephone infrastructure.
That said, DSL is a popular choice for both small businesses and residential offices. If it's available, it's easy to get the phone company to bundle your service with your landline and bill you at the same time. Often you'll also get a package discount for having multiple services. Most important, you can hook the DSL modem up to your router or wireless router and share the Internet connection among several computers.
With many people using their cell phones as their home phones and landlines slowly fading into history, you may wonder if this causes a problem if you want DSL. Not really. Many phone providers will provide you with DSL without a landline (called naked DSL). Of course, you are going to have to pay a surcharge for the use of the phone lines if you don't already use one.
The other half of the popular home-broadband duet is the cable modem. These provide high-speed Internet access through your cable service, much like DSL does over phone lines. You plug your computer into the cable modem using a standard Ethernet cable, just as you would plug into a DSL modem. The only difference is that the other connection goes into a cable TV jack instead of the phone jack. Cable Internet provides broadband Internet access via a specification known as Data Over Cable Service Interface Specification (DOCSIS). Anyone who can get a cable TV connection should be able to get the service.
As advertised, cable Internet connections are usually faster than DSL connections. You'll see a wide variety of claimed speeds; some cable companies offer packages with download speeds up to 50 Mbps, 100 Mbps, or up to 400 Mbps and various upload speeds as well. If it's that fast, why wouldn't everyone choose it? Although cable generally is faster, a big caveat to these speeds is that they are not guaranteed and they can vary. And again, with many phone companies not really differentiating between DSL and fiber-optic, it can be difficult to understand exactly what you're comparing.
One of the reasons that speeds may vary is that you are sharing available bandwidth within your distribution network. The size of the network varies, but it's usually between 100 and 2,000 customers. Some of them may have cable modems too, and access can be slower during peak usage times. Another reason is that cable companies make liberal use of bandwidth throttling. If you read the fine print on some of their packages that promise the fast speeds, one of the technical details is that they boost your download speed for the first 10 MB or 20 MB of a file transfer, and then they throttle your speed back down to your normal rate.
To see how this could affect everyone's speed on the shared bandwidth, consider a simplified example. Let's say that two users (Sally and John) are sharing a connection that has a maximum capacity of 40 Mbps. For the sake of argument, let's assume that they are the only two users and that they share the bandwidth equally. That would mean normally each person gets 20 Mbps of bandwidth. If Sally gets a boost that allows her to download 30 Mbps, for however long, that leaves John with only 10 Mbps of available bandwidth. If John is used to having 20 Mbps, that 10 Mbps is going to seem awfully slow.
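The Sally-and-John example above, expressed as arithmetic. The 40 Mbps segment, the even split, and the 30 Mbps boost are the hypothetical numbers from the paragraph; the function just shows that whatever one user consumes comes out of the pool left for everyone else.

```python
# Shared-bandwidth arithmetic for the example above.

def remaining_bandwidth(total_mbps, boosted_user_mbps, other_users):
    # Whatever the boosted user consumes leaves less for the rest of the segment.
    leftover = total_mbps - boosted_user_mbps
    return leftover / other_users

normal_share = 40 / 2
print(normal_share)                                # 20.0 Mbps each, normally
print(remaining_bandwidth(40, 30, other_users=1))  # 10.0 Mbps left for John during Sally's boost
```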
Although it may seem as though we are down on cable modems, you just need to understand exactly what you and your customers are getting. In practice, the speeds of a cable modem are pretty comparable to those of DSL. Both have pros and cons when it comes to reliability and speed of service, but a lot of that varies by service provider and isn't necessarily reflective of the technology. When it comes right down to it, the choice you make between DSL and cable (if both are available in your area) may depend on which company you get the best package deal from: phone and DSL through your telephone company or cable TV and cable modem from your cable provider. The company's reputation for quality and customer service may also play a role.
To summarize, the advantages of cable include high download speeds and wide availability anywhere cable TV service runs.
As with anything else, there are possible disadvantages: the bandwidth is shared with other customers in your distribution network, advertised speeds are not guaranteed, and providers may throttle transfer rates.
Cable modems can be connected directly to a computer but can also be connected to a router or wireless router just like a DSL modem. Therefore, you can share an Internet connection over a cable modem.
Fiber-optic cable is pretty impressive with the speed and bandwidth it delivers. For nearly all of fiber-optic cable's existence, it's been used mostly for high-speed telecommunications and network backbones. This is because it is much more expensive than copper to install and operate. The cables themselves are pricier, as is the hardware at the end of the cables.
Technology follows this inevitable path of getting cheaper the longer it exists, and fiber is really starting to embrace its destiny. Some phone and media companies are now offering fiber-optic Internet connections for home subscribers.
An example of one such option is Fios by Verizon. It offers fiber-to-the-home (FTTH) service, which means that the cables are 100 percent fiber from their data centers to your home. As of this writing, Fios offered basic packages at 200 Mbps download and 200 Mbps upload, and the fastest speeds are 940 Mbps down and 880 Mbps up. Near-gigabit speeds mean you can download a two-hour HD movie in just over one minute. That's ridiculously fast. Other providers will offer similar packages.
Yet another service you may see is called fiber-to-the-curb (FTTC). This runs fiber to the phone or cable company's utility box near the street and then runs copper from there to your house. Maximum speeds for this type of service are around 25 Mbps. These options are probably best suited for small businesses or home offices with significant data requirements, unless online gaming is really important to you.
Connecting to fiber-based Internet requires an optical network terminal (ONT), which we talked about in Chapter 5. From the ONT, you will have a copper network cable running to a router of some sort (say, a wireless router), and then the computers will connect to the router to get to the Internet.
Are there any downsides to a fiber Internet connection? Really only two come to mind. The first is availability. It's still a little spotty on where you can get it. The second is price. That great gigabit connection can easily cost you $200 per month after any special introductory pricing wears off.
Moving on from wired Internet connections, let's talk about wireless ones. One type of broadband Internet connection that does not get much fanfare is satellite Internet. Instead of a cabled connection, it uses a satellite dish to receive data from an orbiting satellite and relay station that is connected to the Internet. Satellite connections are typically a little slower than wired broadband connections, with downloads often maxing out at around 125 Mbps and uploads around 3 Mbps. To compare plans and prices, visit satelliteinternet.com.
The need for a satellite dish and the reliance on its technology is one of the major drawbacks to satellite Internet. People who own satellite dishes will tell you that there are occasional problems due to weather and satellite alignment. You must keep the satellite dish aimed precisely at the satellite or your signal strength (and thus your connection reliability and speed) will suffer. Plus, cloudy or stormy days can cause interference with the signal, especially if there are high winds that could blow the satellite dish out of alignment. Receivers are typically small satellite dishes (like the ones used for DirecTV or Dish Network) but can also be portable satellite modems (modems the size of a briefcase) or portable satellite phones.
Another drawback to satellite technology is the delay (also called propagation delay), or latency. The delay occurs because of the length of time required to transmit the data and receive a response via the satellite. This delay (between 250 and 350 milliseconds) comes from the time it takes the data to travel the approximately 35,000 kilometers into space and return. To compare it with other types of broadband signals, cable and DSL have a delay between customer and ISP of 10 to 30 milliseconds. With standard web and email traffic, this delay, while slightly annoying, is acceptable. However, with technologies like VoIP and live Internet gaming, the delay is intolerable.
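As a sanity check on those latency numbers, the sketch below computes the pure propagation delay for a signal that has to travel up to a geostationary satellite and back down, using the approximate altitude quoted above. Processing and queuing add to this, which is why real-world figures land in the 250 to 350 millisecond range for one trip to the ISP and higher for a full request and response.

```python
# Propagation delay for a signal traveling dish -> satellite -> ground station.

SPEED_OF_LIGHT_KM_S = 300_000   # approximate speed of light
ALTITUDE_KM = 35_000            # approximate geostationary altitude used above

one_hop = ALTITUDE_KM / SPEED_OF_LIGHT_KM_S   # ground to satellite (or back)
up_and_down = 2 * one_hop                     # one full customer-to-ground-station leg

print(f"One hop:     {one_hop * 1000:.0f} ms")      # ~117 ms
print(f"Up and down: {up_and_down * 1000:.0f} ms")  # ~233 ms, before processing delay
```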
Of course, satellite also has advantages; otherwise, no one would use it. First, satellite connections are incredibly useful when you are in an area where it's difficult or impossible to run a cable, or if your Internet access needs are mobile and cellular data rates just don't cut it, at least not until you get cellular 5G.
The second advantage is due to the nature of the connection. This type of connection is called point-to-multipoint because one satellite can provide a signal to a number of receivers simultaneously. It's used in a variety of applications, from telecommunications and handheld GPSs to television and radio broadcasts and a host of others.
Here are a few considerations to keep in mind regarding satellite: the dish requires a clear line of sight to the satellite, weather can degrade the signal, latency makes it a poor fit for real-time applications such as VoIP and online gaming, and it tends to cost more than comparable wired broadband.
It seems that everyone—from kindergarteners to 80-year-old grandparents—has a smartphone today, and of course almost all of them have persistent Internet access. The industry has revolutionized the way we communicate and, some say, contributed to furthering an attention deficit disorder–like, instant gratification–hungry society. In fact, the line between cell phones and computers has blurred significantly with all the smartphones on the market. It used to be that the Internet was reserved for “real” computers, but now anyone can be online at almost any time.
You're probably at least somewhat familiar with cellular Internet—it's what smartphones, many tablets, and older cellular (or cell) phones use. You might not have thought of it in terms of networking, but that's really what it is. There's a central access point, like a hub, which is the cellular network tower. Devices use radio signals to communicate with the tower. The tower, in turn, is connected via wires to a telecommunications backbone, which essentially talks to all the other commercial telecom networks in the world. It's a huge network.
For years, this network was pretty slow, especially by today's standards. The most advanced cell standard when the Internet started becoming a thing was 3G. (There were also 1G and 2G standards before that.) Initially, it had the bandwidth to carry only voice conversations, which consume little bandwidth. Then texting was supported, and after several enhancements it could in theory support downloads of 7 Mbps, although actual data rates varied by carrier, equipment, the number of users connected to the tower, and the distance from the tower. The more current standards are 4G and 5G.
In 2008, the next generation beyond 3G, appropriately named fourth generation (4G), made its appearance. To be specific, 4G refers to a generation of standards for mobile devices (such as phones and tablets) and telecommunication services that fulfill the International Mobile Telecommunications Advanced (IMT-Advanced) specifications as adopted by the International Telecommunication Union (ITU). In more practical terms, it's simply the next-in-line standard for wireless telephone, Internet, video, and mobile TV that replaced 3G. To meet IMT-Advanced standards, the service must provide peak data rates of at least 100 Mbps for high-mobility communication (such as trains or cars) and 1 Gbps for low-mobility communication. One major difference between 4G and 3G is that 4G is designed to use IP instead of traditional telephone circuits. It's designed to provide mobile broadband access.
The first 4G devices that came on the market did not offer anything close to the speeds specified by the ITU. Mobile manufacturers branded them 4G anyway, and there wasn't much the ITU could do to stop it. The result was that the world became inundated with 4G advertising.
In the early days of 4G, there were two competing standards—WiMAX and Long-Term Evolution (LTE). WiMAX was the marketing name given to the IEEE 802.16 standard for wireless MAN technology. While it was initially promising and had higher speeds than its competing standard, LTE was what the mobile providers latched onto. And as already mentioned, they advertised a lot. For years, whenever you turned on the TV, you couldn't help but be bombarded with commercials from cell providers pitching the fastest or widest or whatever-est 4G LTE network.
The biggest enhancement of 4G LTE over 3G is speed. Whereas with true 3G technology you were limited to about 500 Kbps downloads, some 4G LTE networks will give you download speeds of 10–20 Mbps and upload speeds of 3–10 Mbps. (The theoretical maximum for LTE is 300 Mbps download and 75 Mbps upload.) The range of 4G LTE depends on the tower and obstructions in the way. The optimal cell size is about 3.1 miles (5 km) in rural areas, and you can get reasonable performance for about 19 miles (30 km).
New ITU mobile specifications come out about every 10 years, so it stands to reason that we're now on 5G. Even though the first 5G modem was announced in 2016, it took until late 2018 for cellular providers to test-pilot 5G in several cities. Rollout expanded in earnest in 2019, and now it's fairly widespread, though it's not everywhere yet.
The fifth generation (5G) of cellular technology is a massive improvement over 4G—some users will be able to get sustained wireless speeds in excess of 1 Gbps. The theoretical maximum peak download capacity is 20 Gbps, but, of course, that would require pristine conditions, which don't occur in real life.
The technical specifications for 5G divide it into three categories: enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC).
Initially the focus has been on eMBB and developing the infrastructure to support mobile devices for consumers. Two versions of eMBB will take hold: fixed wireless broadband in densely populated areas and LTE for everywhere else.
Let's start with LTE, because we've already talked about it some. 5G's version of LTE is similar to 4G LTE, just with faster speeds. It will use existing LTE frequencies in the 600 MHz to 6 GHz range. Browsing speeds for 5G are about seven to nine times faster than 4G (490 Mbps on average), and most users can get 100 Mbps download speeds, compared to 8 Mbps on their 4G LTE network. So, in general, expect 5G LTE to be about seven to ten times faster than a comparable 4G connection.
The really exciting feature of eMBB is fixed wireless broadband. This technology uses millimeter wave bands (called mmWave) in the 24 GHz to 86 GHz range. With mmWave, 5G users should expect gigabit speeds over a wireless connection. This great performance comes with a catch, though. (Doesn't it always?)
Very short radio waves such as the ones used in mmWave can carry a lot of data, but there are two inherent problems: they don't travel very far, and they are easily blocked by obstacles such as buildings.
To overcome the first challenge, transmitters need to be placed very close together. This shouldn't be too much of a problem in urban areas, because 4G transmitters are already packed close together and providers can simply attach a 5G transmitter in the same place.
The second challenge is a bit trickier. Engineers found a way to take advantage of signal bounce—the fact that signals blocked by buildings actually bounce off the buildings—to ultimately bounce the signal to the end user's device. It slows down transmission speeds a bit, but it's still a lot faster than 4G technology. Due to the limitations, however, the fastest 5G connections will be available only in densely populated urban areas.
The promise of 5G performance is intoxicating. With it, users can stream 8K movies over cellular connections with no delays, and it can also enable virtual reality and augmented reality over cell connections. 5G users in urban areas may never need to connect to a Wi-Fi network again, because their cellular connections will be just as fast. Wireless plans may truly become unlimited because of the bandwidth capacity that mmWave creates. For these reasons, some experts have called 5G a revolution, not an evolution.
One final type of wireless Internet connection is that provided by a Wireless Internet service provider (WISP). In very broad terms, a WISP is an ISP that grants access using a wireless technology. Specifically, though, the industry uses the term to refer to providers that offer fixed point-to-point, relatively short-distance broadband Internet.
That differs from the 5G we mentioned in the last section, because while 5G is point-to-point, the receiver (someone's smartphone) is not generally in a fixed location. The receiver for a WISP-based Internet connection will be a fixed receiver, often a small dish or an antenna.
WISPs can operate over unlicensed channels such as 900 MHz, 2.4 GHz, 5 GHz, 24 GHz, and 60 GHz, or they might offer service in a licensed frequency in the 6 GHz to 80 GHz range. WISP connections require line-of-sight and can be subject to interference and delay. Table 7.6 summarizes the connection types we have discussed in this chapter.
TABLE 7.6 Common Internet connection types and speeds

Connection type | Approximate basic package cost | Download speed range | Description
---|---|---|---
Dial-up | $10–$20 | Up to 56 Kbps | Plain old telephone service. A regular analog phone line.
DSL | $20–$30 | Up to 50 Mbps | Inexpensive broadband Internet access method with wide availability, using telephone lines.
Cable | $20–$30 | Up to 100 Mbps | Inexpensive broadband Internet access method with wide availability, using cable television lines.
Fiber | $40–$50 | Up to 1 Gbps | Incredibly fast and expensive.
Satellite | $30–$40 | Up to 25 Mbps | Great for rural areas without cabled broadband methods. More expensive than DSL or cable.
Cellular | $30–$50 | Up to 100 Mbps with 5G LTE or 1 Gbps with mmWave | Great range; supported by cell phone providers. Best for a very limited number of devices.
WISP | $40–$150 | 6 Mbps to 50 Mbps | Fast connection for rural areas without cabled broadband methods.
Along with deciding how your computers will get to the outside world, you need to think about how your computers will communicate with each other on your internal network. The choices you make will depend on the speed you need, distance and security requirements, and cost involved with installation and maintenance. It may also depend some on the abilities of the installer or administrative staff. You may have someone who is quite capable of making replacement Cat 6 cables but for whom making replacement fiber-optic cables is a much more daunting task. Your choices for internal connections can be lumped into two groups: wired and wireless.
Wired connections form the backbone of nearly every network in existence. Even as wireless becomes more popular, the importance of wired connections still remains strong. In general, wired networks are faster and more secure than their wireless counterparts.
When it comes to choosing a wired network connection type, you need to think about speed, distance, and cost. You learned about several types of wired connections in Chapter 5, such as coaxial, UTP, STP, and fiber-optic, but the only two you'll want to go with today are twisted pair and fiber. You'll run one of the two (or maybe a combination of the two), with UTP by far the most common choice, as an Ethernet star network. Table 7.7 shows a summary of the more common Ethernet standards along with the cable used, speed, and maximum distance. The ones you need to know for the exam are Cat 5, Cat 5e, Cat 6, and Cat 6a, but it's good to be familiar with the others as well.
TABLE 7.7 Common Ethernet standards

Standard | Cables used | Maximum speed | Maximum distance
---|---|---|---
10BaseT | UTP Cat 3 and above | 10 Mbps | 100 meters (∼300 feet)
100BaseTX | UTP Cat 5 and above | 100 Mbps | 100 meters
100BaseFX | Multi-mode fiber | 100 Mbps | 2,000 meters
1000BaseT | UTP Cat 5e and above | 1 Gbps | 100 meters
10GBaseT | UTP Cat 6 and above | 10 Gbps | 55 meters (Cat 6) or 100 meters (Cat 6a, 7, and 8)
25GBaseT or 40GBaseT | UTP Cat 8 | 25 Gbps or 40 Gbps | 30 meters
10GBaseSR | Multi-mode fiber | 10 Gbps | 300 meters
10GBaseLR | Single-mode fiber | 10 Gbps | 10 kilometers (6.2 miles)
10GBaseER | Single-mode fiber | 10 Gbps | 40 kilometers (∼25 miles)
The first question you need to ask yourself is, “How fast does this network need to be?” There really is no point installing a 10BaseT network these days because even the slowest wireless LAN speeds can deliver that. For most networks, 100 Mbps is probably sufficient. If the company has higher throughput requirements, then look into Gigabit Ethernet (1 Gbps) or faster (10 Gbps).
The second question is then, “What is the maximum distance I'll need to run any one cable?” In most office environments, you can configure your network in such a way that 100 meters will get you from any connectivity device to the end user. If you need to go longer than that, you'll definitely need fiber for that connection unless you want to mess with repeaters.
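Those two questions, speed and distance, can be answered against Table 7.7 with a simple lookup. The sketch below encodes only a handful of rows from that table and is purely illustrative; it is not an exhaustive catalog of Ethernet standards.

```python
# Pick candidate Ethernet standards given a required speed and a cable-run length,
# using a few rows from Table 7.7.

STANDARDS = [
    # (name, cable, speed in Mbps, max distance in meters)
    ("100BaseTX", "UTP Cat 5+",        100,    100),
    ("1000BaseT", "UTP Cat 5e+",       1000,   100),
    ("10GBaseT",  "UTP Cat 6a+",       10000,  100),
    ("10GBaseSR", "Multi-mode fiber",  10000,  300),
    ("10GBaseLR", "Single-mode fiber", 10000,  10000),
]

def options_for(required_mbps, run_meters):
    return [name for name, cable, speed, dist in STANDARDS
            if speed >= required_mbps and dist >= run_meters]

print(options_for(1000, 90))    # ['1000BaseT', '10GBaseT', '10GBaseSR', '10GBaseLR']
print(options_for(10000, 250))  # ['10GBaseSR', '10GBaseLR']; longer runs push you to fiber
```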
As you're thinking about which type of cable to use, also consider the hardware you'll need. If you are going to run fiber to the desktop, you need fiber network cards, routers, and switches. If you are running UTP, you need network cards, routers, and switches with RJ-45 connectors. If you're going to run Gigabit, all devices that you want to run at that speed need to support it.
The third question to ask is, “How big of a deal is security?” Most of the time, the answer lies somewhere between “very” and “extremely.” Copper cable is pretty secure, but it does emit a signal that can be intercepted, meaning people can tap into your transmissions (hence, the term wiretap). Fiber-optic cables are immune to wiretapping. Normally this isn't a big deal, because copper cables don't exactly broadcast your data all over, as a wireless connection does. But if security is of the utmost concern, then fiber is the way to go.
Fourth, “Is there a lot of electrical interference in the area?” Transmissions across a copper cable can be ravaged by the effects of electromagnetic interference (EMI). Fiber is immune to those effects.
Finally, ask yourself about cost. Fiber cables and hardware are more expensive than their copper counterparts. Table 7.8 summarizes your cable choices and provides characteristics of each.
TABLE 7.8 Cable types and characteristics

Characteristics | Twisted pair | Fiber-optic
---|---|---
Transmission rate | Cat 5: 100 Mbps; Cat 5e: 1 Gbps; Cat 6/6a and 7: 10 Gbps; Cat 8: 25 Gbps or 40 Gbps | 100 Mbps to 10 Gbps
Maximum length | 100 meters (328 feet) is standard; Cat 6 at 10 Gbps is 55 meters; Cat 8 at 25 Gbps or 40 Gbps is 30 meters | About 25 miles
Flexibility | Very flexible | Fair
Ease of installation | Very easy | Difficult
Connector | RJ-45 (Cat 8.2 uses non-RJ-45 connectors) | Special (SC, ST, and others)
Interference (security) | Susceptible | Not susceptible
Overall cost | Inexpensive | Expensive
NIC cost | 100 Mbps: $15–$40 | $100–$150; easily $600–$800 for server NICs
10-meter cable cost | Cat 5/5e: $8–$12; Cat 8: Up to $50 | Depends on mode and connector type, but generally $20–$40
8-port switch cost | 100 Mbps: $30–$100 | $300 and up
Fiber-optic cabling has some obvious advantages over copper, but, as you can see, it may be prohibitively expensive to run fiber to the desktop. What a lot of organizations will do is use fiber sparingly, where it is needed the most, and then run copper to the desktop. Fiber will be used in the server room and perhaps between floors of a building as well as any place where a very long cable run is needed.
People love wireless networks for one major reason: convenience. Wireless connections enable a sense of freedom in users. They're not stuck to their desks; they can work from anywhere! (We're not sure if this is actually a good thing.) Wireless isn't typically as fast and it tends to be a bit more expensive than wired copper networks, but the convenience factor far outweighs the others.
When you are thinking about using wireless for network communications, the only real technology option available today is IEEE 802.11. The other wireless technologies we discussed earlier in the chapter can help mobile devices communicate, but they aren't designed for full wireless LAN (WLAN) use. Your choice becomes which 802.11 standard you want to use.
So how do you choose which one is right for your situation? You can apply the same thinking you would for a wired network in that you need to consider speed, distance, security, and cost. Generally speaking, though, with wireless it's best to start with the most robust technology and work your way backward.
Security concerns about wireless networks are similar, regardless of your choice. You're broadcasting network signals through air; there will be some security concerns. It comes down to range, speed, and cost.
In today's environment, it's silly to consider 802.11n or older. Deciding that you are going to install an 802.11n network from the ground up at this point is a bit like saying you are going to use 10BaseT. You could try, but why?
That brings us to your most likely choices: 802.11ac (Wi-Fi 5) and 802.11ax (Wi-Fi 6). 802.11ac is plenty fast and will be cheaper, but 802.11ax gives better performance, especially in densely crowded networks. It will come down to cost. Network cards will run you anywhere from $20 to $100, and you can get wireless access points and wireless routers for as little as $20 to $40. Shop around to see what kind of deal you can get. Exercise 7.1 has you do just that.
Once all your plans are complete, you've double-checked them, and they've been approved by the client or boss, you can begin physical installation. As we've said before (but can't overstate), having good plans up front saves time and money. The approval process is critical too, so the people in charge are informed and agree to the plans. Here we'll look at installation of three groups of items: network cards, cables, and connectivity devices.
Before you can begin communicating on your network, you must have a NIC installed in the device. External USB NICs are super easy to install: you literally plug one in, and it will install its driver and be ready to go. Installing an internal NIC is a fairly simple task if you have installed any expansion card before; a NIC is just a special type of expansion card. In Exercise 7.2, you will learn how to install an internal NIC.
Now that your NIC is installed, it's time to configure it with the right IP address and TCP/IP configuration information. There are two ways to do this. The first is to automatically obtain IP configuration information from a Dynamic Host Configuration Protocol (DHCP) server, if one is available on the network. This is called dynamic configuration. The other way is to manually enter in the configuration information. This is called static configuration.
To configure your NIC in Windows 10, open Control Panel and view by small icons or large icons. Click Network and Sharing Center to see basic network information, as shown in Figure 7.11. From here, there are a few ways you can get to the TCP/IP settings. The first is to click the network name link to the right of Connections. That will open a network status window. Click the Properties button to see the properties, as shown in Figure 7.12. The second way is to click Change Adapter Settings in the left pane of the Network and Sharing Center. You'll see the name of a connection, such as Local Area Connection. Right-click that and then click Properties. This will take you to the screen shown in Figure 7.12.
FIGURE 7.11 Network and Sharing Center
On that screen, highlight Internet Protocol Version 4 (TCP/IPv4) and click Properties. This will take you to a screen similar to the one shown in Figure 7.13.
FIGURE 7.12 Wi-Fi Properties
FIGURE 7.13 TCP/IP Properties
As you can see in Figure 7.13, this computer is configured to obtain its information automatically from a DHCP server. (If you have a wireless router, as many people do on their home networks, it can function as a DHCP server. We'll talk more about that in a few sections.) If you wanted to configure the client manually, you would click Use The Following IP Address and enter the correct information. To supply the client with a DNS server address manually, click Use The Following DNS Server Addresses.
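If you'd rather confirm from a command line whether the NIC ended up with a dynamic or static configuration, the short sketch below runs the standard ipconfig /all command and scans its output. It's a convenience sketch only (Windows, English-language output assumed; the line filtering is my own choice), not a substitute for the Control Panel dialogs described above.

```python
# Print the DHCP, address, gateway, and DNS lines from "ipconfig /all" (Windows).
import subprocess

output = subprocess.run(["ipconfig", "/all"], capture_output=True, text=True).stdout

for line in output.splitlines():
    line = line.strip()
    if line.startswith(("DHCP Enabled", "IPv4 Address", "Default Gateway", "DNS Servers")):
        print(line)
```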
Installing an internal NIC is the same whether the card is wired or wireless. Of course, the big difference is how the card connects to the network. A wireless card needs to know the name of the wireless network, or the Service Set Identifier (SSID).
To configure a wireless connection, you can simply bring a Windows (XP or newer) laptop or computer within range of a wireless access point, and Windows will detect and alert you to the presence of the access point. Alternatively, if you would like control over the connection, in Windows 10, you can choose Start ➢ Settings ➢ Network & Internet to bring up the Network Status screen, as shown in Figure 7.14. Click Show Available Networks, and you will get a screen similar to the one shown in Figure 7.15.
FIGURE 7.14 Network Status
FIGURE 7.15 Available wireless connections
From this screen, you can view the SSIDs of the available wireless networks, including the one to which you are connected (the one that says “Connected” next to it). The icon to the left of the network name indicates the relative signal strength of each connection. Stronger (and faster) connections will have more bars.
To connect to any network, click it and choose the Connect button, and Windows will try to connect. Once you are connected, Windows will display “Connected” next to that connection.
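For reference, the same list of nearby SSIDs can be pulled from the command line. The sketch below simply wraps the built-in netsh wlan show networks command and prints the lines of interest; it assumes a Windows system with a wireless adapter and the WLAN AutoConfig service running, and the filtering is illustrative only.

```python
# List nearby SSIDs and signal strengths via the built-in netsh command (Windows).
import subprocess

result = subprocess.run(
    ["netsh", "wlan", "show", "networks", "mode=bssid"],
    capture_output=True, text=True
)

for line in result.stdout.splitlines():
    line = line.strip()
    if line.startswith("SSID") or line.startswith("Signal"):
        print(line)
```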
Network cables are not the most fun things to install. Proper installation of network cables generally means running them through ceilings and walls and making a mess of the office. Thank goodness for wireless!
If you are installing a wired network in an existing office space, you may want to look into hiring out the cable installation to a third party. Many companies have the tools to properly install a wired network.
When installing a wired network yourself, always be aware of the maximum cable lengths, as outlined in Table 7.7. In addition, utilize cable troughs in ceilings and walls or another conduit in walls to keep your cables organized. Figure 7.16 shows a cable trough; they come in a variety of lengths and quality.
Finally, if you must run cables across the floor in a walkway (which isn't recommended), use a floor cable guard to avoid creating a trip hazard and to protect your cables. A floor cable guard is shown in Figure 7.17.
FIGURE 7.16 Cable trough
FIGURE 7.17 Floor cable guard
In this network installation section, we started with the local computer (the NIC) and then moved on to cabling, which, of course, is needed if the network is wired. Continuing our trek away from the local computer, we now need to look at devices that help you connect to other computers. Broadly speaking, we can break these connectivity devices into two categories: those that make connections to the Internet and those that make connections to other local computers. In the first category, we have DSL and cable modems; in the second category, we have switches, wireless routers, and wireless access points.
We covered the basic installation of DSL and cable Internet service earlier in the chapter, but since we're talking about network installation, now is a good time to review.
To access the outside world, the DSL modem (again, remember it's not really a modem, but that's the colloquial term) connects to a telephone line. The cable modem connects to the outside world through cable television lines. The ISP manages the connection for you. Internally, the DSL or cable modem can be connected directly to a computer using a UTP cable, or it can connect to a switch, router, or wireless connectivity device so that multiple users can share the connection.
For the most part, there is little to no configuration you can perform on a DSL or cable modem. The ISP must initiate the connection to the Internet from their end. Sometimes they need to send a technician to your location to enable it, but other times they just ship the device and you plug it in. Don't forget to plug in the power cord! Beyond that, most ISPs don't want you touching any settings on the modem. If something is wrong, you need to reach out to their tech support people for assistance.
Some DSL and cable modems have built-in wireless router capabilities. If so, it's possible that the ISP will want to charge you more per month for that feature. If the modem is so enabled, you may be able to configure basic settings through a web-based interface. Configuring one of these is very similar to configuring a stand-alone wireless router, which we'll cover in detail later in this chapter.
Wired switches and hubs are fairly easy to install. Plug in the power, and plug in the network cables for client computers. Hubs don't typically have configuration options—they just pass signals along. Unmanaged switches are similar to hubs in the sense that there's not much to configure. Managed switches will have configuration settings for VLANs and other options. For more information on the services that managed switches can provide, refer to Chapter 5.
Instead of using switches and hubs, wireless networks use either a wireless access point (WAP) or a wireless router to provide central connectivity. A WAP functions essentially like a wireless hub, whereas a wireless router provides more functionality, similar to that of a wired router. Based on looks alone, they are pretty much identical, and physically installing them is similar. The differences come in configuring them because they will have different options.
We're going to talk about installing and configuring WAPs and wireless routers interchangeably; just remember that a lot of the features available in a wireless router may not be available in a WAP.
After unwrapping the device from its packaging (and reading the instructions, of course), you must choose a place for it. If it is supplying wireless access to your home network and the Internet, locate it where you can receive access in the most places. Keep in mind that the more walls the signal has to travel through, the lower the signal strength.
In addition, you may choose to have some computers plug directly into the device using a UTP cable. If so, it makes sense to locate the device near the computer or computers you will want to physically connect.
In many offices, WAPs and wireless routers are often placed in the ceiling, with the antennae pointed downward through holes in the ceiling tiles. You can purchase metal plates designed to replace ceiling tiles to hold these devices. The plates have holes precut in them for the antennae to stick through, are designed to securely hold the device and easily open for maintenance, and often lock for physical security. You can also purchase Wi-Fi ceiling antennas that basically look like a little dome hanging from the ceiling.
Once you have chosen the location, plug the unit into a wall outlet and connect the two antennae that come with the unit (as needed; many newer devices contain built-in antennae). They screw onto the antenna connectors on the back of the unit. Once the unit is plugged in, you need to connect it to the rest of your network.
If you are connecting directly to the Internet through a cable modem or DSL or to a wired hub or router, you will most likely plug the cable into the Internet socket of the device, provided that it has one. If not, you can use any of the other wired ports on the back of the device to connect to the rest of your network. Make sure that you get a link light on that connection.
At this point, the device is configured for a home network, with a few basic caveats. First, the default SSID (for example, Linksys) will be used, along with the default administrative password and the default IP addressing scheme. Also, there will be no encryption on the connection. This is known as an open access point. Even if you have nothing to protect except for the Internet connection, you shouldn't just leave encryption turned off. It makes you an easy and inviting target for neighbors who want to siphon off your bandwidth or even worse. Many wireless manufacturers have made their devices so easy to configure that for most networks it is Plug and Play.
From a computer on the home network, insert the device's setup media (flash drive or optical media) into the computer. It will automatically start and present you with a wizard that walks you through setting the SSID of the new access point, changing the default setup password, setting any security keys for the connection, and generally configuring the unit for your network's specific needs. Then you're done!
Each wireless manufacturer uses different software, but you can usually configure their parameters with the built-in, web-based configuration utility that's included with the product. While the software is convenient, you still need to know which options to configure and how those configurations will affect users on your networks. The items that require configuration depend on the choices you make about your wireless network. We will divide the configuration section into two parts: basic configuration and security options, which apply to both routers and access points, and then additional services that are normally router-only.
The Wi-Fi Alliance (www.wi-fi.org) is the authoritative expert in the field of wireless LANs. It lists five critical steps to setting up a secured wireless router; the ones we walk through next are changing the SSID, setting the administrator password and the network security phrase, and enabling encryption.
The parameter that needs immediate attention is the SSID. An SSID is a unique name given to the wireless network. All hardware that is to participate on the network must be configured to use the same SSID. Essentially, the SSID is the network name. When you are using Windows to connect to a wireless network, all available wireless networks will be listed by their SSID when you click Show Available Networks.
When you first install the wireless network, the default SSID is used and no security is enabled. In other words, it's pretty easy to find your network (Linksys), and anyone within range of your signal can get on your network with no password required. This is obviously a security risk, so you want to change that.
For the rest of this example, we'll use a Linksys MR9000 wireless router. First, you need to log into your device. The default internal address of this router is 192.168.1.1, so to log in, open Microsoft Edge (or your preferred Internet browser) and type 192.168.1.1 into the address bar. (Some routers use 192.168.0.1 as a default; check your router's documentation if you are unsure about what your router uses.) You'll get a screen similar to the one shown in Figure 7.18.
FIGURE 7.18 Logging into the wireless router
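If you want a quick sanity check that the router's web interface is even reachable before you open a browser, a short script can do it. This is a minimal sketch using only Python's standard library; it assumes the default 192.168.1.1 address and a plain HTTP admin page, both of which vary by model.

```python
import urllib.request
import urllib.error

ROUTER_URL = "http://192.168.1.1"  # default for this example; some routers use 192.168.0.1

try:
    # Any HTTP response (even a login page or redirect) means the admin interface is listening.
    with urllib.request.urlopen(ROUTER_URL, timeout=5) as response:
        print(f"Router web interface reachable, HTTP status {response.status}")
except urllib.error.HTTPError as err:
    print(f"Router responded with HTTP {err.code} (interface is up, login likely required)")
except (urllib.error.URLError, OSError) as err:
    print(f"Could not reach {ROUTER_URL}: {err}")
```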
You should have already set up the username and password using the installation media provided with the device. If not, look in the documentation for the default username and password. You'll definitely want to change these as soon as possible. Once you're logged in, the first screen you'll see is similar to the one shown in Figure 7.19. You can see sections along the left side that allow you to configure various router settings. On this router, the Connectivity section has an Internet Settings tab that identifies how you configure your incoming connection from the ISP. In most cases, your cable or DSL provider will just have you use DHCP to get an external IP address from its DHCP server, but there are options to configure this manually as well.
FIGURE 7.19 Basic setup screen
Next, configure the parameters that are crucial for operation according to the Wi-Fi Alliance. On this router, the admin password is configured on the Basic tab of the Connectivity settings, as shown in Figure 7.20.
FIGURE 7.20 Basic wireless settings tab
The network name (SSID) as well as the password required by clients to join the network is on the Wi-Fi Settings tab, shown in Figure 7.21. (We blocked out the password for pretty obvious reasons, because this router screen shows it in plain text.) You can change either of these parameters by editing the text in the boxes. Make sure the passwords to join are very different from the administrator password! These steps take care of the SSID, admin password, and security phrase.
FIGURE 7.21 Wi-Fi settings
Let's pop back to Connectivity for a minute to configure the internal network settings on the Local Network tab, as shown in Figure 7.22.
Here, you configure your router's hostname, internal IP address (in this case, 192.168.1.1), and subnet mask. On this router, DHCP is also configured on this screen. If you want the device to act as a DHCP server for internal clients, enable it here, specify the starting IP address, and specify the maximum number of DHCP users. Disabling DHCP means that clients will have to use a static IP address.
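As a rough illustration of how the starting IP address plus the maximum number of DHCP users defines the pool the router hands out, here is a small sketch using Python's standard ipaddress module. The 192.168.1.0/24 network, the .100 starting address, and the user count are example values that mirror the screen described above, not settings pulled from any specific router.

```python
import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")   # router's internal subnet
start = ipaddress.ip_address("192.168.1.100")      # DHCP starting IP address
max_users = 50                                     # maximum number of DHCP users

end = start + (max_users - 1)                      # last address the router would lease
assert start in network and end in network, "Pool must fit inside the local subnet"

print(f"DHCP pool: {start} - {end} ({max_users} addresses)")
print(f"Subnet mask: {network.netmask}, router address in this example: {list(network.hosts())[0]}")
```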
FIGURE 7.22 Local Network Settings screen
The last critical setting you need to make is to enable wireless encryption. If you don't do this, all signals sent from the wireless router to client computers will be in plain text and anyone can join the network without a security password. It's a really bad thing to leave disabled. Before we look at how to set up encryption on our wireless router, let's cover the details on three encryption options.
Wired Equivalent Privacy Wired Equivalent Privacy (WEP) was one of the earliest security protocols for wireless networking, and it has always been under scrutiny for not being as secure as initially intended. WEP is vulnerable due to its static keys and weaknesses in its encryption algorithms. These weaknesses allow the protocol to be cracked in a very short amount of time, often no more than two or three minutes, making WEP one of the more vulnerable security protocols available.
Because of these security weaknesses and the availability of newer protocols, WEP should no longer be used. You may still see it as the default security setting on some older routers, even with all its shortcomings. It's still better than nothing, though, and it does an adequate job of keeping casual snoops at bay.
Wi-Fi Protected Access Wi-Fi Protected Access (WPA) is an improvement on WEP that became widely available around 2003. Once it did, the Wi-Fi Alliance recommended that networks no longer use WEP in favor of WPA.
This standard was the first to implement some of the features defined in the IEEE 802.11i security specification. Most notably among them was the use of the Temporal Key Integrity Protocol (TKIP). Whereas WEP used a static 40- or 128-bit key, TKIP uses a 128-bit dynamic per-packet key. It generates a new key for each packet sent. WPA also introduced message integrity checking.
When WPA was introduced to the market, it was intended to be a temporary solution to wireless security. The provisions of 802.11i had already been drafted, and a standard that employed all the security recommendations was in development. The upgraded standard would eventually be known as WPA2.
Wi-Fi Protected Access 3 The newest and strongest wireless encryption is Wi-Fi Protected Access 3 (WPA3). It was released in 2018 and is mandatory for all new Wi-Fi–certified devices as of July 2020. It offers more robust authentication and increased cryptographic strength to ensure more secure networks.
WPA3-Personal offers increased security even if users choose passwords that don't meet normal complexity requirements through a technology called Simultaneous Authentication of Equals (SAE). SAE is resistant to dictionary attacks, which are brute-force methods of guessing passwords.
WPA3-Enterprise has a plethora of new security features, including multiple authentication methods, enhanced encryption with 128-bit Advanced Encryption Standard Counter Mode with Cipher Block Chaining Message Authentication (AES-CCMP 128), more robust 256-bit security key derivation and confirmation, and 128-bit frame protection. There's also a 192-bit WPA3 mode that increases the strength of all the features already mentioned. It's more than you will need to know for the A+ exam (and WPA3 isn't yet an objective), but if cybersecurity is of interest to you, check it out.
Since 2006, wireless devices have been required to support WPA2 to be certified as Wi-Fi compliant. Of the wireless security options covered on the A+ exam, it provides the strongest encryption and data protection.
On this particular router, you might expect to configure security in the Security section, but that's not where it happens. (Who says things need to be logical?) Instead, the encryption method is selected on the same page where the SSID and network password are configured: the Wireless tab in the Wi-Fi Settings section, shown earlier in Figure 7.21.
This router happens to be 802.11ac, so it has sections for both a 2.4 GHz and a 5 GHz network. If there were only devices of one type, it would make sense to disable the other network. In this case, though, we are talking about security, and you can see that it's set to WPA2-Personal. To change the setting, click the down arrow next to Security mode. The other WPA2 choice you generally have is WPA2-Enterprise, which is more secure than Personal. For a business network, regardless of the size, Enterprise is the way to go. In order to use Enterprise, though, you need a separate security server called a RADIUS server.
With that, the basic router-side setup recommendations have been taken care of. Now it's just a matter of setting up the clients with the same security method and entering the passphrase.
Earlier in the chapter, in the section on 802.11g, we brought up the concept of wireless channels. There are 11 configurable channels in the 2.4 GHz range, which is what 802.11b/g uses to communicate. Most of the time, channel configuration is automatically done, so you won't need to change that.
However, let's say you have too many users for one WAP to adequately service (about 30 or more for Wi-Fi 5 or older routers), or your physical layout is too large for a single access point to cover. In either case, you need more than one access point. In a situation like this, here's how you should configure them:
Set up the WAPs so that they have overlapping ranges.
The minimum overlap is 10 percent, and 20 percent is recommended. This way, if users roam from one area to another, they won't lose their signal.
2.4 GHz channels need to be at least five numbers apart to avoid overlapping. So, for example, channels 2 and 7 do not overlap, nor do channels 4 and 10. There are 11 configurable channels, so you can have a maximum of three access points with overlapping coverage on the same SSID, configured with channels 1, 6, and 11, and not have any interference. Wireless clients are configured to auto-detect a channel by default, but they can be forced to use a specific channel as well.
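The five-channels-apart rule is easy to express in code. Here is a small sketch in Python, assuming the 2.4 GHz channels 1 through 11 described above, that checks whether a set of channels will interfere with one another.

```python
def channels_overlap(a: int, b: int) -> bool:
    """2.4 GHz channels interfere unless they are at least 5 numbers apart."""
    return abs(a - b) < 5

def plan_is_clean(channels: list[int]) -> bool:
    """Return True if no pair of channels in the plan overlaps."""
    return all(
        not channels_overlap(a, b)
        for i, a in enumerate(channels)
        for b in channels[i + 1:]
    )

print(plan_is_clean([1, 6, 11]))   # True  - the classic nonoverlapping trio
print(plan_is_clean([2, 7]))       # True  - also five apart
print(plan_is_clean([1, 3, 11]))   # False - channels 1 and 3 interfere
```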
Some wireless routers allow you to configure the channels used. For example, Figure 7.24 shows a wireless router configuration page where you can configure the 5 GHz network. In this case, you can choose from 20 MHz or 40 MHz channel widths, as well as choose the channel. Each of the 20 MHz channels shown is nonoverlapping.
FIGURE 7.24 5 GHz channels available to select
Wireless routers offer many more services than access points offer. Most of those services fall under the umbrella of configuring your router as a firewall. Firewalls are an important networking concept to understand, so we'll first give you a primer on what firewalls are and what they do, and then we'll look at specific configuration options.
Before we get into configuring your wireless router as a firewall, let's be sure you know what firewalls can do for you. A firewall is a hardware or software solution that serves as your network's security guard. For networks that are connected to the Internet, firewalls are probably the most important device on the network. Firewalls can protect you in two ways. They protect your network resources from hackers lurking in the dark corners of the Internet, and they can simultaneously prevent computers on your network from accessing undesirable content on the Internet. At a basic level, firewalls filter packets based on rules defined by the network administrator.
Firewalls can be stand-alone “black boxes,” software installed on a server or router, or some combination of hardware and software. Most firewalls have at least two network connections: one to the Internet, or public side, and one to the internal network, or private side. Some firewalls have a third network port for a second semi-internal network. This port is used to connect servers that can be considered both public and private, such as web and email servers. This intermediary network is known as a screened subnet, formerly called demilitarized zone (DMZ), an example of which is shown in Figure 7.25. Personal software-based firewalls will run on computers with only one NIC.
FIGURE 7.25 A network with a demilitarized zone (DMZ)
We've already stated that firewalls can be software- or hardware-based or a combination of both. Keeping that in mind, there are two general categories of firewalls: network-based and host-based.
Firewalls are configured to allow only packets that pass specific security restrictions to get through them. They can also permit, deny, encrypt, decrypt, and proxy all traffic that flows through them, most commonly between the public and private parts of a network. The network administrator decides on and sets up the rules a firewall follows when deciding to forward data packets or reject them.
The default configuration of a firewall is generally default deny, which means that all traffic is blocked unless specifically authorized by the administrator. While this is very secure, it's also time consuming to configure the device to allow legitimate traffic to flow through it. The other option is default allow, which means all traffic is allowed through unless the administrator denies it. If you have a default allow firewall and don't configure it, you might as well not have a firewall at all.
The basic method of configuring firewalls is to use an access control list (ACL). The ACL is the set of rules that determines which traffic gets through the firewall and which traffic is blocked. ACLs are typically configured to block traffic by IP address, port number, domain name, or some combination of all three.
Packets that meet the criteria in the ACL are passed through the firewall to their destination. For example, let's say you have a computer on your internal network that is set up as a web server. To allow Internet clients to access the system, you need to allow data on port 80 (HTTP) and 443 (HTTPS) to get to that computer.
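To make the default-deny idea concrete, here is a small sketch of ACL-style filtering in Python. The rule format and the addresses are invented for illustration only; real firewalls use their own rule syntax and match on more fields than this.

```python
from ipaddress import ip_address, ip_network

# Allow rules: (destination network, destination port). Anything not matched is denied.
ALLOW_RULES = [
    (ip_network("203.0.113.10/32"), 80),    # hypothetical internal web server, HTTP
    (ip_network("203.0.113.10/32"), 443),   # same server, HTTPS
]

def packet_allowed(dst_ip: str, dst_port: int) -> bool:
    """Default deny: pass the packet only if an allow rule matches."""
    return any(ip_address(dst_ip) in net and dst_port == port
               for net, port in ALLOW_RULES)

print(packet_allowed("203.0.113.10", 443))   # True  - inbound HTTPS to the web server
print(packet_allowed("203.0.113.10", 3389))  # False - RDP to the web server is denied
print(packet_allowed("203.0.113.25", 80))    # False - HTTP to any other host is denied
```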
Another concept you need to understand is port triggering. It allows traffic to enter the network on a specific port after a computer makes an outbound request on that specific port. For example, if a computer on your internal network makes an outbound RDP request (port 3389), subsequent inbound traffic destined for the originating computer on port 3389 would be allowed through.
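Port triggering is stateful: the router remembers the outbound request and only then allows matching inbound traffic. Here is a rough sketch of that bookkeeping; it is deliberately simplified (real routers also age entries out and let the administrator pair separate trigger and forward ports).

```python
# Simplified port-triggering state: which (internal host, port) pairs have made
# an outbound request and may therefore receive inbound traffic on that port.
triggered = set()

def outbound(internal_ip: str, port: int) -> None:
    """An internal host sends traffic out on a trigger port (e.g., RDP on 3389)."""
    triggered.add((internal_ip, port))

def inbound_allowed(internal_ip: str, port: int) -> bool:
    """Inbound traffic is passed only if that host previously triggered the port."""
    return (internal_ip, port) in triggered

outbound("192.168.1.20", 3389)                  # internal PC makes an outbound RDP request
print(inbound_allowed("192.168.1.20", 3389))    # True  - return traffic is allowed in
print(inbound_allowed("192.168.1.30", 3389))    # False - this host never triggered the port
```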
Nearly every wireless router sold today provides you with some level of firewall protection. On the router used in this example, the firewall options are on two separate tabs. Enabling the firewall and setting a few basic options is done from the Security section, as shown in Figure 7.26. More advanced options, such as configuring port forwarding and port triggering, are on the DMZ and Apps and Gaming tabs. (Remember that DMZs are also called screened subnets.)
FIGURE 7.26 Enabling the firewall
Network Address Translation (NAT) is a very cool service that translates a private IP address on your internal network to a public IP address on the Internet. If you are using your wireless router to allow one or more clients to access the Internet but you have only one external public IP address, your router is using NAT.
Most routers have NAT enabled by default, and there might not be any specific configuration options for it. That's true in the case of the router we've been using as an example. You can enable or disable it on the Advanced Routing tab in Connectivity, but otherwise the only options you can configure are the internal IP addresses that the router hands out to clients.
Another type of NAT is called Dynamic Network Address Translation (DNAT), which translates a group of private addresses to a pool of routable addresses. This is used to make a resource that's on a private network available for consumption on public networks by appearing to give it a publicly available address. For example, if a web server were behind a NAT-enabled router and did not have its own public IP address, it would be inaccessible to the Internet. DNAT can make it accessible.
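Conceptually, NAT keeps a translation table that maps internal (private IP, port) pairs to ports on the router's single public address, and reverses the mapping for replies. The following is a toy sketch of that idea in Python, not how any particular router implements it; the addresses and port numbers are examples only.

```python
PUBLIC_IP = "198.51.100.7"          # the router's single public address (example)
nat_table = {}                      # (private_ip, private_port) -> public_port
next_public_port = 40000

def translate_outbound(private_ip: str, private_port: int):
    """Rewrite an outbound packet's source to the public IP and a mapped port."""
    global next_public_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port: int):
    """Look up which internal host a reply arriving on a public port belongs to."""
    for (priv_ip, priv_port), pub_port in nat_table.items():
        if pub_port == public_port:
            return priv_ip, priv_port
    return None   # no mapping: unsolicited inbound traffic is dropped

print(translate_outbound("192.168.1.20", 51515))   # ('198.51.100.7', 40000)
print(translate_inbound(40000))                    # ('192.168.1.20', 51515)
```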
Universal Plug and Play (UPnP) is a standard designed to simplify the process of connecting devices to a network and enable those devices to automatically announce their presence to other devices on the network. If you remember when Plug and Play was new to computers, it was revolutionary. You simply plugged in a peripheral (such as a USB network card or mouse) and it was detected automatically and it worked. UPnP is the same idea, but for networking. From a standards standpoint, there's not a lot to configure. The client needs to be a DHCP client and the service uses UDP port 1900.
The concept is great. It lets devices connect to the network and discover each other automatically with the Simple Service Discovery Protocol. It can be used for any networked device you can think of, from routers and printers to smartphones and security cameras.
The problem is that UPnP has no authentication mechanism. Any device or user is trusted and can join the network, which is obviously a problem. The security consulting firm Rapid7 did a six-month research study in early 2013 and found that over 6,900 network-aware products, made by 1,500 different companies, responded to public UPnP requests. In total, nearly 81 million individual devices responded. The U.S. Department of Homeland Security and many others immediately began urging people to disable UPnP.
Since that time, the UPnP forum (www.openconnectivity.org) has released statements saying that the security holes have been patched and that the system is more secure than ever. Even though the issues were discovered several years ago, as of this writing, skeptics still abound and UPnP does not appear to be a safe option. Regardless of if and when it gets fixed, the reputation of UPnP is not a good one.
The biggest risk is for open UPnP connections to be exploited by unknown systems on the Internet. Therefore, you should configure your router to not allow UPnP connections from its external connection. Many ISPs have also taken steps to help prevent issues. In summary, the best bet is to leave it disabled.
In this chapter, you learned about wireless networking and configuring a small office, home office (SOHO) network. We started with wireless networking. We introduced the key wireless networking standards 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac (Wi-Fi 5), and 802.11ax (Wi-Fi 6), and talked about their characteristics, such as speed, distances, frequencies, channels, and modulation. Then we moved on to Bluetooth networking, long-range fixed wireless, and, finally, RFID and NFC.
Next, you learned the fundamentals of installing a small network. We started by looking at network planning, which is the critical first step. Don't be the one who forgets that!
Then, we covered the myriad possibilities for Internet connections, from the archaic dial-up to wired broadband options, such as DSL, cable modems, and fiber-optic, and wireless choices: satellite, cellular, and a wireless Internet service provider (WISP). Then we talked about choosing internal network connections in both wired and wireless environments.
From there, we dove into installing network infrastructure. If you did a good job planning, this part should be problem-free. We covered installing and configuring NICs (including IP addressing for clients), cables, and connectivity devices.
Finally, we looked at how to configure a router. The Wi-Fi Alliance has some great practical steps on how to configure a secure wireless network, such as changing the SSID, setting passwords, and enabling encryption, such as WEP, WPA, WPA2, and WPA3. We also looked at other basic configuration options, such as DHCP and communication channels. Then we looked at your wireless router as a firewall, including NAT and UPnP.
Know the different 802.11 standards. Standards you should be familiar with are 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac (Wi-Fi 5), and 802.11ax (Wi-Fi 6). Know the frequencies (2.4 GHz and 5 GHz) that each one uses, as well as performance characteristics of each, such as relative distance and speed.
Know the three nonoverlapping 2.4 GHz wireless channels. If you need three nonoverlapping channels, you must use channels 1, 6, and 11.
Understand Bluetooth networking. Bluetooth isn't used for wireless LAN like 802.11 is but for small personal area networks. It's best for peripheral connectivity such as headsets, keyboards, mice, and printers.
Understand how long-range fixed wireless works. Long-range fixed wireless is a point-to-point wireless connection. It can operate over licensed or unlicensed channels. It can also transmit power through the air, and there are regulatory requirements for doing so.
Understand the difference between RFID and NFC. Both use radio signals. RFID is used to track the presence or location of items. NFC uses high-frequency RFID signals and can be used for touchless payment systems.
Know the different types of available broadband connections. Broadband connections include DSL, cable, fiber, satellite, cellular, and wireless Internet service provider (WISP).
Know the various cellular networking standards. Understand the differences between 4G and 5G, and also how 5G has LTE and mmWave.
Know how to configure a network client to use IP addressing. Clients can be configured with static IP information, or dynamically through a DHCP server. To communicate on the Internet, clients need an IP address, subnet mask, and gateway. On an internal network, you can use private IP addresses as long as they go through NAT to get to the Internet. APIPA is available as a fallback if a DHCP server is not available.
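A quick way to internalize the relationship between IP address, subnet mask, gateway, and APIPA is to check a configuration with Python's standard ipaddress module. The addresses below are examples only; the APIPA check simply tests for the 169.254.0.0/16 range.

```python
import ipaddress

client = ipaddress.ip_interface("192.168.1.100/24")   # address + subnet mask (/24 = 255.255.255.0)
gateway = ipaddress.ip_address("192.168.1.1")

print("Network:", client.network)                               # 192.168.1.0/24
print("Gateway on same subnet:", gateway in client.network)     # must be True to reach the Internet
print("Private address (needs NAT):", client.ip.is_private)     # True for 192.168.x.x

apipa = ipaddress.ip_address("169.254.23.17")
print("APIPA fallback address:", apipa in ipaddress.ip_network("169.254.0.0/16"))  # True
```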
Understand the three encryption protocols used for wireless networking. Listed in order from least to most secure, the common wireless security protocols include WEP, WPA, and WPA2. WPA uses TKIP, and WPA2 uses AES. (WPA3 is the most secure, but it's not yet an exam objective.)
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance-based questions on the A+ exams. The questions on the exams require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
You just purchased a new PCIe network card for a Windows 10 desktop computer. How would you install it?
THE FOLLOWING COMPTIA A+ 220-1101 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
Networks are often complicated structures. When users get on a network, they have expectations that certain services will be delivered, and most of the time they are unaware of the underlying infrastructure. As long as what they want gets delivered, they are content. In client-server networks, which you learned about in Chapter 5, “Networking Fundamentals,” there are one or more servers that play unique roles in fulfilling client requests.
The traditional delivery method for services has been that the servers are on the same network as the clients. They might not be on the same LAN, but they are certainly administered by one company or one set of administrators. If clients on the network need a new feature, the network architects and administrators add the necessary server. This is still the most common setup today, but there's been a sharp growth in cloud computing and virtualization in the last several years. In essence, cloud computing lets networks break out of that model and have services provided by a server that the company doesn't own, and so it's not under the company's direct control. Virtualization is an important technology in cloud computing because it removes the barrier of needing one-to-one relationships between the physical computer and the operating system.
In this chapter, we will talk about some key network services you need to be familiar with as a technician. Servers provide some services, and stand-alone security devices or Internet appliances provide others. After that, we will dive into the world of cloud computing and virtualization, because it's a hot topic that is becoming increasingly important.
As you learned in Chapter 5, networks are made up of multiple types of devices operating together. There are different types of networks, such as peer-to-peer and client-server, and they are categorized based on the types of devices they support. Very simple, small networks can be peer-to-peer, where there are no dedicated servers. However, most networks that you encounter in the business world will have at least one server, and enterprise networks can easily have hundreds or even thousands of servers.
New computer technicians won't be expected to manage large server farms by themselves, but they should be aware of the types of servers and other devices that will be on the network and their basic functions. Experienced technicians may be in charge of one or more servers, and they will need to be intimately familiar with their inner workings. As you gain more experience, you will find that there are advanced certifications in the market to prove your knowledge of servers and show off your skills to potential employers. For the A+ exam, you will need to know various server roles, the features of a few Internet security appliances, the impact of legacy and embedded systems, and the features of Internet of Things (IoT) devices.
Servers come in many shapes and sizes, and they can perform many different roles on a network. Servers are generally named for the type of service they provide, such as a web server or a print server. They help improve network security and ease administration by centralizing control of resources and security; without servers, every user would need to manage their own security and resource sharing. Not everyone has the technical ability to do that, and even if they did, those types of responsibilities might not be part of what they are being asked to deliver at work. Servers can also provide features such as load balancing and increased reliability.
Some servers are dedicated to a specific task, such as hosting websites, and they are called dedicated servers. Nondedicated servers may perform multiple tasks, such as hosting a website and serving as the administrator's daily workstation. Situations like this are often not ideal because the system needs more resources to support everything it needs to do. Imagine that you are the user of that computer and there is heavy website traffic. Your system could slow down to the point where it's difficult to get anything done. In addition, that kind of setup could introduce additional security risks. Servers can also perform multiple server-specific roles at the same time, such as hosting websites and providing file and print services. As you read through the descriptions of server roles, you will see that it makes more sense to combine some services than it does to combine others.
One important decision network architects need to make when thinking about designing a network is where to place the server or servers. In Chapter 7, “Wireless and SOHO Networks,” we introduced the concept of a screened subnet (formerly called a demilitarized zone [DMZ]), which is a network separated from the internal network by a firewall but also protected from the Internet by a firewall. Figure 8.1 shows two examples of screened subnets.
FIGURE 8.1 Screened subnets
In Figure 8.1, you see that the web and mail servers are in the screened subnet and not on the internal network. This configuration can make it easier to manage the network but still provide great security. As a rule of thumb, any server that needs to be accessed by the outside world should be in the screened subnet, and any server that does not need to be accessed from the Internet should be on the internal network, which is more secure. By the way, servers can play the role of firewalls, too. It's not, however, on the list of objectives as a server role, and in practice it's best to separate other server roles from firewalls. In other words, if you intend to use a server as a firewall, then don't use it for any other types of services. Having services on the firewall itself just makes it easier for hackers to get to. There's no sense in making things easier for them. Now it's time to talk about specific server roles on a network.
We discussed Domain Name System (DNS) servers in Chapter 6, “Introduction to TCP/IP,” so we won't go into a lot of depth here. Instead, we'll provide a quick review:
In short, a DNS server resolves hostnames to IP addresses, translating a name such as www.google.com to 72.14.205.104 so that communication can begin. DNS servers for intranet use only can be located on the internal network (inside the network firewalls). If the DNS server is being used for Internet name resolution, it's most effective to place it in the screened subnet. DNS uses UDP or TCP port 53.
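You can watch a DNS server do its job from any client with a couple of lines of Python (standard library only). The address returned for a large site will almost certainly differ from the example above, since big sites publish many addresses.

```python
import socket

hostname = "www.google.com"
# The resolver configured on this client sends the query (normally to UDP port 53)
# and returns one of the host's IP addresses.
print(hostname, "resolves to", socket.gethostbyname(hostname))
```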
Dynamic Host Configuration Protocol (DHCP) servers were also covered in Chapter 6, so there's no sense in repeating all the same material. A quick review is more appropriate: a DHCP server automatically assigns IP addressing information (such as an IP address, subnet mask, default gateway, and DNS server addresses) to clients when they join the network.
DHCP servers should be located on the internal network. If the network has clients that are connecting via remote access, then a device with DHCP capabilities (such as the Remote Access Service [RAS]) can be placed in the screened subnet. DHCP uses UDP ports 67 and 68.
A fileshare or file server provides a central repository for users to store, manage, and access files on the network. There are a few distinct advantages to using file servers, chief among them centralizing the storage, security, and management of files rather than leaving those tasks to individual users.
Fileshares come in a variety of shapes and sizes. Some are as basic as Windows-, macOS-, or Linux-based servers with a large amount of internal hard disk storage space. Networks can also use network-attached storage (NAS) devices, which are stand-alone units that contain hard drives, come with their own file management software, and connect directly to the network. If a company has extravagant data storage needs, it can implement a storage area network (SAN). A SAN is basically a network segment, or collection of servers, that exists solely to store and manage data.
Since the point of a fileshare is to store data, it's pretty important to ensure that it has ample disk space. Some dedicated file servers also have banks of multiple optical drives for extra storage (letting users access files from optical media) or for performing backups. Processing power and network bandwidth can also be important to manage file requests and deliver them in a timely manner.
As far as location goes, fileshares will almost always be on the internal network. You might have situations where a fileshare is also an FTP server, in which case the server should be on the screened subnet. In those cases, however, you should ensure that the server does not contain highly sensitive information or other data that you don't want to lose.
Print servers are much like file servers, except, of course, they make printers available to users. In fact, file servers and print servers are combined so often that you will see a lot of publications or tools refer to file and print servers as if they were their own category.
On its own, a print server makes printers available to clients over the network and accepts print requests from those clients. A print server can be a physical server like a Windows- or Linux-based server, a small stand-alone device attached to a printer (or several printers), or even a server built into the printer itself. Print servers handle the following important functions:
Figure 8.2 shows a simple stand-alone print server. It has an RJ-45 network connection and four USB ports to connect printers. Wireless print servers are easy to find as well.
FIGURE 8.2 A D-Link print server
Although the specific functionality will vary by print server, most of the time administrators will be able to manage security, time restrictions, and other options, including if the server processes the files and if the print jobs are saved after printing. An example is shown in Figure 8.3. Print servers should be located on the internal network.
FIGURE 8.3 Printer management options
Email is critical for communication, and mail servers are responsible for sending, receiving, and managing email. To be a mail server, the computer must be running a specialized email server package. Some popular ones are Microsoft Exchange, Sendmail, Postfix, and Exim, although there are dozens of others on the market.
Clients access the mail server by using an email client installed on their systems. The most common corporate email client is Microsoft Outlook, but Apple Mail, HCL Notes (formerly IBM Notes and Lotus Notes), Gmail, and Thunderbird are also used. Mobile and Internet email clients (which are more popular than their corporate cousins) include the iPhone, iPad, and Android email clients, as well as Gmail, Outlook, Apple Mail, and Yahoo! Mail.
In addition to sending and receiving email, mail servers often have antispam software built into them as well as the ability to encrypt and decrypt messages. Email servers are most often located in the screened subnet. Table 8.1 lists the most important protocols for sending and receiving email.
Protocol | Port | Purpose |
---|---|---|
SMTP | 25 | Sending email and transferring email between mail servers. |
POP3 | 110 | Receiving email. |
IMAP4 | 143 | Receiving email. It's newer and has more features than POP3. |
TABLE 8.1 Important email protocols
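The ports in Table 8.1 are the same ones Python's standard mail libraries use by default, which makes for an easy demonstration. This sketch only opens and closes connections; mail.example.com is a placeholder for a mail server you are actually allowed to test against.

```python
import smtplib, poplib, imaplib

MAIL_HOST = "mail.example.com"   # placeholder hostname

smtp = smtplib.SMTP(MAIL_HOST, 25)    # sending / server-to-server transfer
smtp.quit()

pop = poplib.POP3(MAIL_HOST, 110)     # receiving (POP3)
pop.quit()

imap = imaplib.IMAP4(MAIL_HOST, 143)  # receiving (IMAP4, more features than POP3)
imap.logout()
```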
Network administrators need to know what's happening on their network at all times. The challenge is that there may be hundreds or thousands of devices on the network, with thousands of users accessing resources locally and remotely. Keeping track of who is logging in where, what resources users are accessing, who is visiting the web server, the status of the router, the printer's online status, and innumerable other events could be an administrative nightmare. Fortunately, syslog is available to help manage it all.
Syslog works as a client-server model, where the clients generate messages based on the triggering of certain conditions, such as a login event or an error with a device, and send them to a centralized logging server, also known as the syslog server. Syslog uses UDP port 514 by default. Consequently, the term syslog can be applied to a standard or system for event monitoring, the protocol, or the actual server that collects the logged messages.
Syslog got its start in the UNIX world and is used extensively with Linux-based networking systems and devices. Microsoft operating systems don't natively support syslog—Windows comes with its own event logger called Event Viewer, which we cover in Chapter 15, “Windows 10 Administration”—but it's easy to find packages that let Windows servers participate in a syslog environment. Let's take a look at clients and servers in a syslog system.
Many different types of devices, such as servers, routers, and printers, support syslog as a client across a wide variety of operating systems. The primary job of the client (in syslog terms) is to send a message to the syslog server if certain conditions are met. For example, an authentication server can send a message whenever there is a successful or failed login attempt, or a router can send the status of its used or available bandwidth.
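Python's standard logging module can act as a simple syslog client, which is an easy way to see the client side of this model in action. The server address below is an assumption for illustration; point it at your own syslog server.

```python
import logging
from logging.handlers import SysLogHandler

# Send log records to a syslog server over UDP port 514 (the default transport).
handler = SysLogHandler(address=("192.168.1.50", 514))   # example syslog server address
logger = logging.getLogger("demo-client")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.warning("Failed login attempt for user admin")   # maps to syslog severity 4 (Warning)
logger.info("Backup job completed normally")            # maps to severity 6 (Information)
```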
Messages have three components: the facility (what type of process generated the message), the severity level, and the message text itself. The severity levels are shown in Table 8.2.
Level | Severity | Description |
---|---|---|
0 | Emergency | A panic condition when the system is unusable |
1 | Alert | Immediate action needed |
2 | Critical | Major errors in the system |
3 | Error | “Normal” error conditions |
4 | Warning | Warning conditions, usually not as urgent as errors |
5 | Notice | Normal operation but a condition has been met |
6 | Information | Provides general information |
7 | Debug | Information used to help debug programs |
TABLE 8.2 Syslog severity levels
The syslog server's job is to collect and store messages. Most syslog servers are made up of three components: the listener, a database, and management and filtering software.
Syslog servers listen on UDP port 514 by default. Remember that UDP is a connectionless protocol, so the delivery of packets is not guaranteed. The default implementation of syslog is also not secure. However, you can secure it by running syslog over Transport Layer Security (TLS) and TCP port 6514. Regardless of whether you secure it or not, always place the syslog server behind your firewall and on the internal network.
Even on small networks, devices can generate huge numbers of syslog messages. Therefore, most syslog implementations store messages in a database for easier retrieval and analysis.
Finally, most syslog servers will have management software that you can use to view messages. The software should also have the ability to send the administrator a console message or text (or email) if a critical error is logged. Dozens of syslog packages are available. Some popular packages are Kiwi Syslog by SolarWinds (shown in Figure 8.4), Splunk, syslog-ng, and Syslog Watcher.
FIGURE 8.4 Kiwi Syslog
Whenever you visit a web page, you are making a connection from your device (the client) to a web server. To be more specific, a connection is requested by your Internet software (generally, a web browser) using the Hypertext Transfer Protocol Secure (HTTPS) of the TCP/IP protocol suite. Your client needs to know the IP address of the web server, and it will make the request on port 443.
The web server itself is configured with web hosting software, which listens for inbound requests on port 443. Two of the most common web server platforms are the open source Apache and Microsoft's Internet Information Services (IIS), although there are a few dozen different packages available for use. Web servers provide content on request, which can include text, images, and videos, and they can also do things like run scripts to open additional functions, such as processing credit card transactions and querying databases.
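The request/response exchange described here is easy to reproduce with Python's standard http.client module. The hostname is simply an example of a public HTTPS site, not a server associated with this book.

```python
import http.client

# Connect to a web server on TCP port 443 and request the home page over HTTPS.
conn = http.client.HTTPSConnection("www.example.com", 443, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)        # e.g., 200 OK
print(response.getheader("Content-Type"))      # what kind of content came back
conn.close()
```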
Individuals or independent companies can manage web servers, but more often than not they are managed by an Internet service provider or a web hosting company that runs hundreds or thousands of websites. In fact, one web server can be configured to manage dozens of smaller websites using the same IP address, provided that it has sufficient resources to handle the traffic. On the flip side, huge sites, such as Amazon.com and Google, are actually made up of multiple web servers acting as one site. It's estimated that Google has over 900,000 servers, and Microsoft claims to have over 1 million servers!
If a company wants to host its own web server, the best place for it is in the screened subnet. This configuration provides ease of access (after all, you want people to hit your web server) and the best security. The firewall can be configured to allow inbound port 443 requests to the screened subnet but not to allow inbound requests on those ports to make it to the internal corporate network.
Contrast this to a situation where the web server is on the internal network. The firewall then has to let inbound port 443 connections through to the internal network so that Internet-based clients can get to the web server. However, that also means that inbound requests on port 443 can be sent to all internal computers, including non-web servers and even client computers. Hackers could then potentially take advantage of exploits using port 443 to attempt to gain illegitimate access to the network.
The ultimate goal of a security system is to protect resources by keeping the bad people out and letting the good people in. It would be really easy to configure a system such that no one could access anything, and it would be equally simple to let everyone have open access. The first extreme defeats the purpose of having a network, and the second is just begging for trouble. The challenge then is to find a happy medium, where resources are available to those who should have them and nobody else.
In information security, there's a framework for access control known as triple A, meaning authentication, authorization, and accounting (AAA). Occasionally auditing is added to the mix, making it quad A. And even further, nonrepudiation, or the assurance that something can't be denied by someone, is also sometimes lumped in. Regardless, triple A is the umbrella term for describing systems of access control. AAA servers are gatekeepers and critical components to network security, and they can be implemented on a dedicated server machine, wireless router or access point, Ethernet switch, or a remote access server.
A common term that you will hear in the Windows Server world is domain controller, which is a centralized authentication server. Other types of servers that handle all aspects of AAA are Remote Access Service (RAS), Remote Authentication Dial-In User Service (RADIUS), Terminal Access Controller Access-Control System Plus (TACACS+), and Kerberos. Authentication servers may be stand-alone (e.g., a “Kerberos server”), or the authentication service may be built into a more well-known OS. For example, Windows Server uses Kerberos.
The AAA process will differ slightly between servers, but generally what happens is the user (or computer) trying to access the network presents credentials. If the credentials are deemed appropriate, the authentication server issues the user a security code or a ticket that grants them access to resources. When the owner of the security code or ticket tries to access a resource, authorization comes into play. And finally, accounting tracks all of it. In the following sections, we will describe the principles of authentication, authorization, and accounting.
To implement security, it's imperative to understand who or what is accessing resources on a computer or network. User authentication happens when the system being logged into validates that the user has proper credentials. Essentially, the authentication server asks the question, “Who are you?” Oftentimes, this is as simple as entering a username and password, but it could be more complex. There are two categories of authentication:
As mentioned already, most authentication systems require just a password, which is an example of something you know. If you forget your password, a website might ask you to provide answers to security questions that you selected when you registered. These are questions such as the name of your elementary school, father's middle name, street you grew up on, first car, favorite food, musical artist, and so forth.
One-time passwords can be generated by sites to give you a limited time window to log in. These are far more secure than a standard password because they are valid for only a short amount of time, usually 30 minutes or less. The password will be sent to you via text or email or possibly a phone call.
Something you have can be one of a few different things, such as a smartcard or a security token. A smartcard is a plastic card, similar in dimensions to a credit card, which contains a microchip that a card reader can scan, such as on a security system. Smartcards often double as employee badges, enabling employees to access employee-only areas of a building or to use elevators that go to restricted areas, or as credit cards.
Smartcards can also be used to allow or prevent computer access. For example, a PC may have a card reader through which the employee has to swipe the card, or that reads the card's chip automatically when the card comes into its vicinity. Smartcards can also be combined with a PIN or used as an add-on to a standard login system to give an additional layer of security verification. For someone to gain unauthorized access, they would have to know a user's ID and password (or PIN) and also steal the smartcard. That makes it much more difficult to be a thief!
A security token, like the one shown in Figure 8.5, displays an access code that changes about every 30 seconds. When received, it's synchronized with your user account, and the algorithm that controls the code change is known by the token as well as your authentication system. When you log in, you need your username and password, along with the code on the token.
FIGURE 8.5 RSA SecurID
Security tokens can be software-based as well. A token may be embedded in a security file unique to your computer, or your network may use a program that generates a security token much like the hardware token does. Figure 8.6 shows an example of PingID, which works on computers and mobile devices. This type of token saves you from having to carry around yet another gadget.
FIGURE 8.6 PingID
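Hardware and software tokens generally derive the rotating code from a shared secret and the current time. RSA SecurID uses its own proprietary algorithm, so as an illustration here is a sketch of the open TOTP approach (RFC 6238) that many software tokens use instead; the Base32 secret is a made-up example value.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code from a shared secret (RFC 6238 style)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval            # changes every 30 seconds
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # same secret + same 30-second window = same code
```

Because both the token and the authentication server know the secret and the clock, they independently compute the same code, which is why the number on the token matches what the server expects.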
A system might also require you to log in from a specific location. For example, perhaps users are allowed to log in only if they are on the internal corporate network. Or, maybe you are allowed to connect from your home office. In that case, the security system would know a range of IP addresses to allow in based on the block of addresses allocated to your ISP. This is an example of somewhere you are.
Finally, the system could require something totally unique to you (something you are) to enable authentication. These characteristics are usually assessed via biometric devices, which authenticate users by scanning for one or more physical traits. Common types include fingerprint recognition, facial recognition, and retina scanning.
Once it's determined who the user is, the next step in access control is determining what the user can do. This is called authorization. Users are allowed to perform only specific tasks on specific objects based on what they are authorized to do. Most computers grant access based on a system of permissions, which are groups of privileges. For example, a user might be able to make changes to one file, whereas they are only allowed to open and read another.
One of the key foundations of an authorization system is the principle of least privilege. This states that users should be granted only the least amount of access required to perform their jobs, and no more. This principle applies to computers, files, databases, and all other available resources.
After users have been authenticated and authorized, it's time to think about tracking what the users did with their access. This is where accounting comes in. The principle of accounting seeks to keep a record of who accessed what and when, and the actions they performed.
The most common method of tracking user actions is through the use of logs. Nearly all operating systems have built-in logs that track various actions. For example, Windows-based systems contain Windows Logs, which are part of Event Viewer. To open Event Viewer, click Start and type Event. Click Event Viewer in Best Matches when it appears. Windows has logs that track application events, security events, and system events. Figure 8.7 shows the Security log. In an environment where multiple users log in, those logins will be shown here.
Another action that is frequently tracked is web browsing history. Web browsers retain a historical account of the sites that have been visited. To see viewing history in Microsoft Edge, click the Hub (it looks like a star, near the upper-right corner) and then History, as shown in Figure 8.8. There's an option to clear the history as well. Note that this action clears it from the browser, but it won't clear it from any servers (such as a proxy server) that cache web requests. To view the history in Chrome, click the More menu (the three vertical dots) and then History, or just open Chrome and press Ctrl+H.
FIGURE 8.7 Security log in Event Viewer
FIGURE 8.8 Microsoft Edge site-viewing history
The definition of an Internet appliance is a device that makes it easy to access the Internet. Taking a slightly broader view, Internet appliances can also help users safely access the Internet by protecting against some of the dangers that lurk there. The CompTIA A+ 220-1101 exam objectives list four items under Internet appliances: spam gateways, unified threat management (UTM), load balancers, and proxy servers. Let's take a look at each one.
Spam email is pervasive today. If you use email, it's likely you get spam, and a lot of it. What might be hard to believe is that you would get significantly more if there weren't antispam devices protecting you. A spam gateway is an appliance—most likely a software installation or virtual appliance—that blocks malicious emails from entering a network. They go by other names as well, such as antispam gateways (which sound more appropriate), spam blockers, and email gateways.
Antispam gateways can be located in two places: in the cloud or on the internal network, meaning inside the corporate firewall. The intent is that emails inbound to a corporate email server first go through the gateway. If the gateway checks the email and verifies that it's not spam, it passes the mail through to the server. Flagged emails get sent to a spam folder, quarantined, or deleted.
Emails that contain certain keywords, have malicious links, or come from domains that are known to send spam are most likely to get flagged. On occasion, legitimate emails will get flagged too, but these appliances get far more right than they get wrong.
Some spam gateways will also handle outbound emails. Hopefully your users aren't spammers, but if they are, the gateway will put an end to it. In most cases, a company that sends spam does so accidentally; a hacker could be sending emails from your company's domain by spoofing it or through other tricks that might not be immediately visible. A good spam gateway will block these outbound messages and notify the administrator.
The Internet is a wondrous place, but it's a scary one as well. It seems like for every video of puppies or kittens doing cute things, there are 10 hackers lurking in dark corners trying to steal identities or crash servers. It's an unfortunate reality of the Internet age. Software and hardware solutions have sprung up in response to various types of threats, and managing all of them can be a challenge. For example, a network needs a firewall, antimalware and antispam software, and perhaps content filtering and intrusion prevention system (IPS) devices as well. It's a lot to deal with.
The goal of unified threat management (UTM) is to centralize security management, allowing administrators to manage all their security-related hardware and software through a single device or interface. For administrators, having a single management point greatly reduces administration difficulties. The downside is that it introduces a single point of failure. If all network security is managed through one device, a device failure could be problematic.
UTM is generally implemented as a stand-alone device (or series of devices) on a network, and it will replace the traditional firewall. A UTM device can generally provide several types of services, including firewall, intrusion prevention, antimalware, antispam, and content filtering.
UTM has become quite popular in the last several years. Many in the industry see it as the next generation of firewalls.
Imagine you want to do some online shopping. You open your browser, type amazon.com into the address bar, and the site appears. You've made a connection to the Amazon server, right? But is there only one Amazon server? Considering the millions of transactions Amazon completes each day, that seems highly unlikely. In fact, it's not the case. Amazon has dozens if not hundreds of web servers, each of them capable of fulfilling the same tasks to make your shopping experience as easy as possible. Each server helps balance out the work for the website, which is called load balancing.
Load-balancing technology can be implemented with local hardware or on the cloud. If implemented on a local network, a hardware device, conveniently named a load balancer, essentially acts like the web server to the outside world. When a user visits the website, the load balancer sends the request to one of many real web servers to fulfill the request. Cloud implementations have made load balancing easier to configure and expand, since the servers can be virtual instead of physical.
We already shared one example of load balancing with an online retailer. In that example, all servers are identical (or very close to identical) and perform the same tasks. Two other common load-balancing configurations are cross-region and content-based.
In a cross-region setup, all servers likely provide access to the same types of content, much like in our Amazon example. The big feature with this setup is that there are servers local to each region—proximity to the users will help speed up network performance. For example, say that a company has a geo-redundant cloud and users in North America, Asia, and Europe. When a request comes in, the load balancer senses the incoming IP address and routes the request to a server in that region. This is illustrated in Figure 8.9. If all servers in that region are too busy with other requests, then it might be sent to another region for processing.
FIGURE 8.9 Cross-region load balancing
Another common way to load-balance is to split up banks of servers to handle specific types of requests. For example, one group of servers could handle web requests, while a second set hosts streaming video and a third set manages downloads. This type of load balancing is called content-based load balancing and is shown in Figure 8.10.
FIGURE 8.10 Content-based load balancing
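In code, content-based load balancing is essentially a dispatch table keyed on request type, with each pool of servers rotated round-robin. This is a conceptual sketch only; the pool names and server addresses are invented for illustration.

```python
from itertools import cycle

# One pool of servers per content type (example addresses), each rotated round-robin.
pools = {
    "web":       cycle(["10.0.1.11", "10.0.1.12"]),
    "video":     cycle(["10.0.2.21", "10.0.2.22", "10.0.2.23"]),
    "downloads": cycle(["10.0.3.31"]),
}

def route(request_type: str) -> str:
    """Pick the next server in the pool that handles this type of content."""
    return next(pools[request_type])

for req in ["web", "web", "video", "downloads", "web"]:
    print(req, "->", route(req))
```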
Load balancing has performance benefits for high-traffic networks and heavily used applications. Scalability and reliability are important benefits as well. Let's give a few examples of each.
Reliability Imagine a company that uses a business-critical application for remote salespeople. What happens if the server hosting that application crashes? It wouldn't be good.
With load balancing, different servers can host the application, even in different regions. Perhaps a hurricane wipes out the data center in Florida. The load balancer can direct users to other data centers in different regions, and the business can continue to generate revenue.
A proxy server makes requests for resources on behalf of a client. The most common one that you will see is a web proxy, but you might run into a caching proxy as well. Exercise 8.1 shows you where to configure your computer to use a web proxy server in Windows 10.
You can get to the same page shown in Figure 8.11 from within Edge. Follow these steps:
Click More Actions (the three horizontal dots in the upper-right corner) ➢ Settings ➢ View Advanced Settings ➢ Open Proxy Settings.
The Connections tab of the Internet Properties window will open.
FIGURE 8.12 Alternate method of configuring a proxy client
Enter the address in the Address box. (In Internet Explorer, click Tools ➢ Internet Options and then the Connections tab.)
The proxy settings apply to all browsers on the client computer, so you don't need to configure it in multiple places if you use multiple browsers.
Let's use an example of a web proxy to illustrate how the proxy server process works. The user on the client computer opens a web browser and types in a URL. Instead of the request going directly to that website, it goes to the proxy server. The proxy then makes the request of the website and returns the requested information to the client computer. If it sounds to you like this slows down Internet browsing, you're right—it does. But there are three strong potential benefits to using a proxy.
First, the proxy server can cache the information requested, speeding up subsequent searches. (This is also the only function of a caching proxy, but caching-only proxies are most commonly configured to work on a local intranet.) Second, the proxy can act as a filter, blocking content from prohibited websites. Third, the proxy server can modify the requester's information when passing it to the destination, blocking the sender's identity and acting as a measure of security; the user can be made anonymous.
Keep in mind that if all of the traffic from a network must pass through a proxy server to get to the Internet, that can really slow down the response time. Make sure the proxy or proxies have ample resources to handle all the requests. Figure 8.13 shows an example of a proxy server on a network.
FIGURE 8.13 A proxy server on a network
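For applications outside the browser, the same principle applies: the client hands its request to the proxy rather than to the destination. The following sketch uses Python's standard urllib module; the proxy address proxy.example.com:8080 is an assumption for illustration, so substitute whatever your network actually provides:

    import urllib.request

    # Hypothetical proxy address supplied by the network administrator
    proxy = urllib.request.ProxyHandler({
        "http":  "http://proxy.example.com:8080",
        "https": "http://proxy.example.com:8080",
    })
    opener = urllib.request.build_opener(proxy)

    # The request goes to the proxy, which fetches the page on the client's behalf
    with opener.open("http://example.com/") as response:
        print(response.status, "-", len(response.read()), "bytes received")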
In regular human terms, legacies are considered a good thing. Most of us want to leave a legacy of some kind, whether it's within a community or organization or within our own families. A legacy is something that lives on far beyond a human life span.
If you mention the term legacy system in the computer world, though, you are likely to be met with groans and eye rolls. It means that the system is old and hopelessly outdated by today's computing standards. Legacy systems are usually defined as those using old technology in one or more areas, such as hardware, operating systems, applications, or network protocols.
Many legacy systems were state of the art when they were originally implemented in the 1970s or 1980s, but they haven't been upgraded or replaced. Today, though, they are old and slow, and specialized knowledge is required to maintain and operate them. For example, someone might need to know the Pick operating system (which came out in the 1970s), how to operate an IBM AS/400 or manage a VAX, or how to configure the IPX/SPX network protocol. (Google these topics sometime!)
It's not just the really old stuff, though. Even technologies that have been invented after the turn of the century can now be considered legacy. For example, Microsoft no longer supports the Windows XP and Windows 7 operating systems. Security and other updates will no longer be provided for these OSs, which could introduce security risks. On the hardware side, the wireless networking 802.11b standard is definitely legacy today, and if 802.11g isn't already legacy, it should be. It's really hard to find new components that include these standards.
So why don't companies replace legacy systems? It's complicated, and it usually comes down to cost and risk.
Furthermore, it's challenging to find technicians and consultants who understand legacy systems. People move from company to company, or consultants retire and take their specialized knowledge with them. Someone who was a mid-20s computer whiz in 1975 is now in their 70s and probably retired. The cost to find someone knowledgeable on these systems can be high.
Speaking of high costs, finding replacement hardware can range from difficult to impossible, and it's expensive when it can be found. Eventually, the cost of maintenance might outweigh the cost of upgrading, but then again it might not.
A great example of critical legacy systems is a category known as supervisory control and data acquisition (SCADA). SCADA systems are high-level management systems that are used to control manufacturing machines and processes; manage large-scale infrastructure, such as power grids, oil and gas pipelines, and water treatment facilities; and run components in buildings, such as heating and air conditioning. In other words, they're everywhere, and they manage some very important things. But most SCADA systems are extremely old and were designed to be open access, so they are huge security holes. Hackers have been exploiting those holes faster than developers have been able to patch them.
So what's a network administrator to do? If possible, replacing or repurposing legacy systems can provide long-term benefits to a company. But also recognize the risk involved. If replacement isn't an option, then the best advice that we can give is to learn as much as you can about them. Hopefully, the system is based on established standards, so you can look them up on the Internet and learn as much as possible. If not, see what operating manuals you can track down, or pick the brains of those who understand how they operate. As challenging as legacy systems can be, you can make yourself quite valuable by being the expert on them.
A common administrative option is to try to isolate the legacy system as much as possible so that its lack of speed doesn't affect the rest of the network. This is usually much easier to do with hardware or protocols than with software. For example, the network might be set up with one segment that hosts the legacy systems or protocols.
One technology that is helping replace and update legacy systems is virtualization, which can obviate the need for one-to-one hardware-to-software relationships. We cover virtualization later in the chapter, in the “Concepts of Virtualization” section.
The Internet of Things (IoT) has been growing rapidly for the last several years. Home automation systems and IoT-based security systems are becoming increasingly popular and will likely continue to become more pervasive. IoT also has a place in the manufacturing world, as companies seek to contain costs.
IoT networks will often have a central controller or coordinating device, much like a network switch but dedicated specifically to IoT devices. The administrator will have an app, usually on a smartphone, that provides them with a Wi-Fi or Bluetooth connection to the controller. Settings the administrator enters are then communicated to the end devices via the controller. Many IoT devices can be configured manually as well, but what's the fun in that? One of the key features of IoT is to control devices and see their status without needing to be physically present at each one. It could take an entire book to cover the different types of IoT devices, but we'll look at a few popular ones here.
A thermostat is a device connected to a heating or cooling system that allows users to control the temperature of a room, home, or building. Programmable thermostats that allow you to set the temperature based on time and day have been around for more than 20 years. The next step in the evolution is a smart thermostat (shown in Figure 8.14) that's remotely accessible and can do some basic learning based on the weather and your preferences. Smart thermostats usually have a touch screen, often have their own app, and can be controlled by a central coordinator.
FIGURE 8.14 Nest smart thermostat
By Raysonho @ Open Grid Scheduler/Grid Engine. CC0, https://commons.wikimedia.org/w/index.php?curid=49900570
Smart thermostats have the options you would expect, such as setting the temperature, detecting humidity levels, and configuring schedules to save energy when no one is home. The advanced features, such as learning your preferences over time and integrating with other smart devices through a central coordinator, are what make these devices especially interesting.
Different models offer different features, so you're sure to find one that meets your needs as well as your budget.
Home security systems have been around for a long time, and security cameras are an integral part of them. With IoT-based cameras, security systems are more customizable and smarter than ever.
The security camera can be stand-alone but most often is part of a series of cameras connected to a home security system. That central system will often have a touch screen as well as an app. The system can be activated in several ways, such as by the ringing of an integrated doorbell or by motion sensors. Most systems will record video when they're triggered by motion. Video footage may be stored on local storage, such as an SD card, or on the cloud. Configuration options often include when to activate (for example, when motion is detected), how to notify the user (such as a text or email), and how long to record and store footage.
Home security and automation systems may also control door locks and light switches.
Smart door locks are a feature often integrated into home security systems. They will typically be accompanied by a camera and linked to a doorbell. Many will also have a number pad so that you can enter a code to unlock the door instead of needing a key. Figure 8.15 shows a Schlage smart door lock.
In addition to offering security, smart door locks can provide convenience. For example, perhaps you are expecting a delivery. Someone rings the doorbell. With a smart security system that includes a doorbell and door locks, you can instantly see who it is on your smartphone and talk to them. If you feel comfortable, you can unlock the door and tell them to place the package inside. Once they are done, you can lock the door again. Or, you can tell them to set the package down and you'll come get it in a few minutes, whether you're on the other side of the house or the other side of the country.
FIGURE 8.15 Schlage smart door lock
Smart light switches help control lights in the house. Many are designed to replace existing light switches in the wall, whereas others simply mount to the wall. An example of a Lutron switch is shown in Figure 8.16. In addition to having manual controls, many will have their own app or can be controlled through a coordinator.
Features of a smart light switch are fairly straightforward. They can turn lights on or off and dim the lights. They can perform tasks based on a schedule, and some have geofencing or motion sensors to detect when someone enters a room. Some brands will work only with certain types of lights, so make sure to check compatibility.
Smartphones ushered in the widespread use of voice-enabled digital assistants. It started with Siri on the iPhone, and Google Now (“Okay, Google”) soon followed for the Android OS. Microsoft even got into the act with Cortana, which was used with its now defunct Windows Phone OS and is also integrated into Windows 10 and Windows 11. Amazon wanted in, too, but it doesn't have a smartphone OS. So instead, it created a voice-enabled smart speaker called the Echo with a virtual assistant known simply as Alexa. Google, not to be outdone in the digital assistant market, created Google Home, which uses Google Assistant, the successor to Google Now. The market for these devices is very competitive.
FIGURE 8.16 Lutron smart light switch
The first smart speakers on the market were Wi-Fi–enabled speakers that would listen for you to activate them. Once you said their name, they would listen to your question and use their Internet connection to perform a search and deliver an answer. They were incredibly handy for asking about the weather, playing a song, or answering an obscure trivia question at a party. Newer smart speakers may have integrated video screens as well, up to about 8" or 10" in size. This makes them larger and more conspicuous but lets users view their integrated security cameras, see music videos and movies, and more.
While the features of a smart speaker/digital assistant are appealing to many, there are some potential concerns and risks as well. For example, unless you turn it off, the device is always listening. This is unnerving to some people, and others have even suggested it could be used to eavesdrop on its users. Device manufacturers have gone to great lengths to protect users' privacy, but be aware that this could still be an issue.
A famous story of misuse comes from late-night television. In 2017, comedian Jimmy Kimmel decided to pull a prank on all of his viewers who had an Amazon Echo or similar device. With the audience quiet, he loudly instructed Alexa (the name that the Amazon digital assistant responds to) to order $500 worth of foam swim “pool noodles.” Urban legend has it that multiple viewers were affected by this and were sent the product. (To be fair, there are conflicting reports about whether it actually worked. If it did, Amazon would have accepted the return and reversed the charge.) Of course, this problem could be avoided by turning off automatic voice ordering, which is a feature of Amazon's digital assistants. Finally, some flaky smart speaker units have been known to speak when not spoken to or even to start laughing for no apparent reason. While this is definitely creepy, it's not necessarily a security threat.
The computer industry is one of big trends. A new technology comes along and becomes popular, until the next wave of newer, faster, and shinier objects comes along to distract everyone from the previous wave. Thinking back over the past 20 to 30 years, there have been several big waves, including the rise of the Internet, wireless networking, and mobile computing.
Within each trend, there are often smaller ones. For example, the Internet was helped by modems and ISPs in the middle of the 1990s, and then broadband access took over. Wireless networking has seen several generations of faster technology, from the 11 Mbps 802.11b, which at the time it came out was pretty cool, to 802.11ax, delivering gigabit wireless. Mobile computing has been a long-lasting wave, first with laptop computers becoming more popular than desktops, and then with handheld devices (namely, smartphones and tablets) essentially functioning like computers.
The biggest recent wave in the computing world is cloud computing. Its name comes from the fact that the technology is Internet-based; in most computer literature, the Internet is represented by a graphic that looks like a cloud. It seems like everyone is jumping on the cloud (pun intended, but doesn't that sound like fun?), and technicians need to be aware of what it can provide and its limitations. The most important core technology supporting cloud computing is virtualization. We will cover both topics in the following sections.
You hear the term a lot today—the cloud. What exactly is the cloud? The way it's named (probably because of the word the at the beginning), it sounds like one giant, fluffy, magical entity that does everything you could ever want a computer to do. Only it's not quite that big, fluffy, or magical, and it's not even one thing.
Cloud computing is a method by which you access remote servers to store files or run applications for you. There isn't just one cloud—there are hundreds of commercial clouds in existence today. Many of them are owned by big companies, such as Microsoft, Google, HP, Apple, and Amazon. Basically, they set up the hardware and/or software for you on their network, and then you use it.
Using the cloud sounds pretty simple, and in most cases it is. From the administrator's side, though, things can be a little trickier. Cloud computing involves a concept called virtualization, which means that there isn't necessarily a one-to-one relationship between a physical server and a logical (or virtual) server. In other words, there might be one physical server that virtually hosts cloud servers for a dozen companies, or there might be several physical servers working together as one logical server. From the end user's side, the idea of a physical machine versus a virtual machine doesn't even come into play, because it's all handled behind the scenes. We'll cover virtualization in more depth later in this chapter.
There are many advantages to cloud computing, and the most important ones revolve around money. Cloud providers can get economies of scale by having a big pool of resources available to share among many clients. It may be entirely possible for them to add more clients without needing to add new hardware, which results in greater profit. From a client company's standpoint, the company can pay for only the resources it needs without investing large amounts of capital into hardware that will be outdated in a few years. Using the cloud is often cheaper than the alternative. Plus, if there is a hardware failure within the cloud, the provider handles it. If the cloud is set up right, the client won't even know that a failure occurred. Other advantages of cloud computing include fast scalability for clients and ease of access to resources regardless of location.
The biggest downside of the cloud has been security. The company's data is stored on someone else's server (off premises), and company employees are sending it back and forth via the Internet. Cloud providers have dramatically increased their security over the last several years, but this can still be an issue, especially if the data is highly sensitive material or personally identifiable information (PII). Also, some companies don't like the fact that they don't own the assets.
Now let's dive into the types of services clouds provide, the types of clouds, cloud-specific terms with which you should be familiar, and some examples of using a cloud from the client side.
Cloud providers sell everything “as a service.” The type of service is named for the highest level of technology provided. For example, if computing and storage are the highest level provided, the client purchases infrastructure as a service. If applications are involved, it's software as a service. Nearly everything that can be digitized can be provided as a service. Let's take a look at the three most common types of services offered by cloud providers, from the ground up: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).
Figure 8.17 shows examples of these three types of services. SaaS is the same as the Application layer shown in the figure.
The level of responsibility between the provider and the client is specified in the contract. It should be very clear which party has responsibility for specific elements, should anything go awry.
FIGURE 8.17 Common cloud service levels
“Cloud computing” by Sam Johnston. Licensed under CC BY-SA 3.0 via
Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Cloud_computing.svg#/media/File:Cloud_computing.svg
Running a cloud is not restricted to big companies offering services over the Internet. Companies can purchase virtualization software to set up individual clouds within their own network. That type of setup is referred to as a private cloud. Running a private cloud gives up many of the features that companies want from the cloud, such as rapid scalability and offloading the purchase and management of computer assets. The big advantage, though, is that it allows the company to control its own security within the cloud.
The traditional type of cloud that usually comes to mind is a public cloud, like the ones operated by the third-party companies we mentioned earlier. These clouds offer the best in scalability, reliability, flexibility, geographical independence, and cost effectiveness. Whatever the client wants, the client gets. For example, if the client needs more resources, it simply scales up and uses more. Of course, the client will also pay more, but that's part of the deal.
Some clients have chosen to combine public and private clouds into a hybrid cloud. This gives the client the great features of a public cloud while simultaneously allowing for the storage of more sensitive information on the private cloud. It's the best of both worlds.
The last type of cloud to discuss is a community cloud. These are created when multiple organizations with common interests combine to create a cloud. In a sense, it's like a public cloud but with better security. The clients know who the other clients are and, in theory, can trust them more than they could trust random people on the Internet. The economies of scale and flexibility won't be as great as with a public cloud, but that's the trade-off for better security.
We've discussed several important cloud features to this point. The National Institute of Standards and Technology (NIST), a group within the U.S. Department of Commerce, has defined five essential characteristics of cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.
In addition to those characteristics, the A+ exam objectives list two more: file synchronization and high availability. File synchronization is straightforward enough—it makes sure that the most current copy is on the cloud as well as on a local device. If changes are made to one, the other copy gets updated accordingly. High availability refers to uninterrupted and responsive service. When we say uninterrupted, though, we should probably clarify that by saying it's mostly uninterrupted. The level of uptime guaranteed by the cloud service provider (CSP) will be specified in a document called the service level agreement (SLA).
Service availability is measured in terms of “nines,” or how many nines of uptime the provider guarantees. For example, “three nines” means that the service will be available 99.9 percent of the time, whereas “four nines” will be up 99.99 percent of the time. More nines means more money, and different aspects of your service contract might require different levels of uptime. For example, a critical medical records database might need more guaranteed uptime than would a word processing application. The level of service you should get depends on how much risk your company is willing to take on and the trade-off with cost. Table 8.3 shows how much downtime is acceptable based on the number of nines of guaranteed uptime.
Availability | Downtime per year | Downtime per day |
---|---|---|
Three nines (99.9%) | 8.77 hours | 1.44 minutes |
Four nines (99.99%) | 52.6 minutes | 8.64 seconds |
Five nines (99.999%) | 5.26 minutes | 864 milliseconds |
Six nines (99.9999%) | 31.56 seconds | 86.4 milliseconds |
TABLE 8.3 Availability downtime
Guaranteeing that services will be available with less than one second of downtime per day, as is the case with five nines, is pretty impressive. You might see other combinations, too, such as “four nines five,” which translates into 99.995 percent availability, or no more than 4.32 seconds of downtime per day. The majority of CSPs will provide at least three nines or three nines five.
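The figures in Table 8.3 come from simple arithmetic: allowed downtime equals one minus the availability, multiplied by the length of the period. A quick Python sketch reproduces the table and the four nines five example:

    # Allowed downtime = (1 - availability) x length of the period
    def downtime_hours(availability_pct, period_hours):
        return (1 - availability_pct / 100) * period_hours

    HOURS_PER_YEAR = 8766   # 365.25 days x 24 hours
    HOURS_PER_DAY = 24

    levels = [("Three nines", 99.9), ("Four nines", 99.99),
              ("Four nines five", 99.995), ("Five nines", 99.999)]

    for label, pct in levels:
        yearly = downtime_hours(pct, HOURS_PER_YEAR)
        daily_seconds = downtime_hours(pct, HOURS_PER_DAY) * 3600
        print(f"{label}: {yearly:.2f} hours/year, {daily_seconds:.2f} seconds/day")

For three nines, this works out to roughly 8.77 hours per year and 86.4 seconds (1.44 minutes) per day, matching the table.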
Up to this point, we have primarily focused on the characteristics that make up cloud computing. Now let's turn our attention to some practical examples with which users will probably be more familiar. The two types of cloud interaction we will cover are storage and applications. The next two sections will assume the use of public clouds and standard web browser access.
Storage is the area in which cloud computing got its start. The idea is simple—users store files just as they would on a hard drive, but with two major advantages. One, they don't need to buy the hardware. Two, different users can access the files regardless of where they are physically located. Users can be located in the United States, China, and Germany; they all have access via their web browser. This is particularly helpful for multinational organizations.
There is no shortage of cloud-based storage providers in the market today. Each provider offers slightly different features. Most of them will offer limited storage for free and premium services for more data-heavy users. Table 8.4 shows a comparison of some of the better-known providers. Please note that the data limits and prices can change; this table is provided for illustrative purposes only and doesn't include every level of premium service available. Most of these providers offer business plans with unlimited storage as well for an additional cost.
Service | Free | Premium | Cost per year |
---|---|---|---|
Dropbox | 2 GB | 3 TB | $199 |
Apple iCloud | 5 GB | 2 TB | $120 |
Box | 10 GB | 100 GB | $60 |
Microsoft OneDrive | 5 GB | 100 GB | $24 |
Google Drive | 15 GB | 2 TB | $100 |
TABLE 8.4 Cloud providers and features
Which one should you choose? If you want extra features, such as web-based applications, then Google or Microsoft is probably the best choice. If you just need data storage, then Box or Dropbox might be a better option. Some allow multiple users to access a personal account, so that might figure into your decision as well.
Most cloud storage providers offer synchronization to the desktop, which gives you a folder on your computer that looks and behaves just like any other folder on your hard drive, except that its contents are kept in sync with the cloud. It's important to note that the folder will almost always have the most current version of the files stored in the cloud. The synchronization app typically runs in the background and has configurable options, including what, when, and how often to synchronize.
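Conceptually, the sync client just keeps the newer copy on both sides. Here is a greatly simplified sketch in Python that treats the cloud side as an ordinary path; the file locations are assumptions for illustration, and a real client would use the provider's API, detect changes continuously, and handle conflicts:

    import shutil
    from pathlib import Path

    def sync_file(local: Path, remote: Path):
        # Keep whichever copy was modified most recently on both sides
        if local.exists() and (not remote.exists() or
                               local.stat().st_mtime > remote.stat().st_mtime):
            shutil.copy2(local, remote)    # push the newer local copy up
        elif remote.exists() and (not local.exists() or
                                  remote.stat().st_mtime > local.stat().st_mtime):
            shutil.copy2(remote, local)    # pull the newer remote copy down

    # Hypothetical paths; the second one stands in for the provider's storage
    sync_file(Path("report.docx"), Path("Z:/clouddrive/report.docx"))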
Accessing the sites is done through your web browser. Once you are in the site, managing your files is much like managing them on your local computer. In Figure 8.18, you can see the Google Drive interface, with a few files and folders in it.
You have a few options for sharing a folder with another user. One way is to right-click the folder and choose Share. You'll be asked to enter their name or email address and to indicate whether they can view or edit the file (Figure 8.19). You can also choose Get Link, which will copy a URL link to your Clipboard to paste into a message. Be mindful that the default behavior is that anyone who has the link can view the folder. This might be okay, but it could also be a security risk. You can change the sharing settings by performing the following steps:
In the Get Link window that appears (Figure 8.20), click the down arrow next to Anyone With The Link.
Options include keeping access restricted to only the people you've specifically added or allowing anyone with the link to open the folder, with a role of viewer, commenter, or editor.
FIGURE 8.18 Google Drive
FIGURE 8.19 Sharing a folder on Google Drive
FIGURE 8.20 Share with others settings
Google really popularized the use of web-based applications. After all, the whole Chromebook platform, which has been very successful, is based on this premise. Other companies have gotten into the cloud-based application space as well, such as Microsoft with Office 365. The menus and layout are slightly different from PC-based versions of Office, but if you're familiar with Office, you can easily use Office 365—and all of the files are stored on the cloud.
Cloud-based apps run through your web browser. This is great for end users for a couple of reasons. One, your system does not have to use its own hardware to run the application; you are basically streaming a virtual application. Two, different client OSs can run the application (usually) without worrying about compatibility issues. Applications can often work across platforms as well, meaning that laptops, desktops, tablets, and smartphones can all use various apps.
To create a new document using Google Docs, you click the New button, as shown on the left side of Figure 8.18, and then choose the application from the menu. If you choose Google Docs, it opens a new browser window with Google Docs, as shown in Figure 8.21. Notice that near the top, it says Saved to Drive. When it says this, you know that the document has been saved automatically.
FIGURE 8.21 Google Docs
When choosing a cloud provider, you may use any one you like. In fact, it's better if you experience the differences in how providers store files and let you manage and manipulate them before making your choice. Exercise 8.2 will give you experience with using cloud-based storage and applications—specifically, Google Drive and its associated apps. This exercise will work best if you have someone you can work with. For example, in a classroom setting, you can partner with someone. If you are studying at home, you can create multiple accounts and get the same experience. You will just need to log out and in with your other account to see the shared files.
The newest trend in web applications and cloud storage is the streaming of media. Companies such as Netflix, Amazon, Pandora, Apple, and others store movies and music on their clouds. You download their client software, and for a monthly subscription fee, you can stream media to your device. It can be your phone, your tablet, your computer, or your home entertainment system. Before the advent of broadband network technologies, this type of setup would have been impossible, but now it is poised to become the mainstream way that people receive audio and video entertainment.
Perhaps the easiest way to understand virtualization is to compare it to more traditional technologies. In the traditional computing model, a computer is identified as being a physical machine that is running some combination of software, such as an operating system and various applications. There's a one-to-one relationship between the hardware and the operating system.
For the sake of illustration, imagine that a machine is a file server and now it needs to perform the functions of a web server as well. To make this happen, the administrator would need to ensure that the computer has enough resources to support the service (CPU, memory, network bandwidth), install web server software (Microsoft Internet Information Services [IIS] or Apache HTTP Server, for example), configure the appropriate files and permissions, and then bring it back online as a file and web server. These would be relatively straightforward administrative tasks.
But now imagine that the machine in question is being asked to run Windows Server and Linux at the same time. Now there's a problem. In the traditional computing model, only one OS can run at one time, because each OS completely controls the hardware resources in the computer. Sure, an administrator can install a second OS and configure the server to dual-boot, meaning the OS to run is chosen during the boot process, but only one OS can run at a time. So if the requirement is to have a Windows-based file server and a Linux-based Apache web server, there's a problem. Two physical computers are needed.
Similarly, imagine that there is a Windows-based workstation being used by an applications programmer. The programmer has been asked to code an app that works in Linux, or Apple's iOS, or anything other than Windows. When the programmer needs to test the app to see how well it works, what do they do? Sure, they can configure their system to dual-boot, but once again, in the traditional computing model, they are limited to one OS at a time per physical computer. Their company could purchase a second system, but that quickly starts to get expensive when you have multiple users with similar needs.
This is where virtualization comes in. The term virtualization is defined as creating virtual (rather than actual) versions of something. In computer jargon, it means creating virtual environments where “computers” can operate. We use quotation marks around the word computers because they don't need to be physical computers in the traditional sense. Virtualization is often used to let multiple OSs (or multiple instances of the same OS) run on one physical machine at the same time. Yes, they are often still bound by the physical characteristics of the machine on which they reside, but virtualization breaks down the traditional one-to-one relationship between a physical set of hardware and an OS.
We have already hit on the major feature of virtualization, which is breaking down that one-to-one hardware and software barrier. The virtualized version of a computer is appropriately called a virtual machine (VM). Thanks to VMs, it is becoming far less common to need dual-boot machines today than in the past. In addition, VMs make technology like the cloud possible. A cloud provider can have one incredibly powerful server that is running five instances of an OS for client use, and each client is able to act as if it had its own individual server. On the flip side, cloud providers can pool resources from multiple physical servers into what appears to the client to be one system, effectively giving clients unlimited processing or storage capabilities (assuming, of course, that the provider doesn't physically run out of hardware).
Virtual machines have a wide variety of applications, many of which are cloud-based services. Here are three specific uses you should pay particular attention to:
Virtual Sandbox Imagine a scenario where you have an application that you want to test out in an OS, but you don't want any negative effects to happen to the computer system doing the testing. One way to do this is to test the app in a sandbox, which is a temporary, isolated desktop environment. Think of it as a temporary, somewhat limited virtual machine. Any app in the sandbox will act as it would in a full version of the chosen OS, with one big difference. Files are not saved to the hard drive or memory, so the physical machine should never be affected by anything the app in the sandbox does. When the sandbox gets shut down, so does the app and any data associated with it.
There are several sandboxing software solutions on the market, including Sandboxie, Browser in the Box, BufferZone, SHADE Sandbox, and ToolWiz Time Freeze. Some of them are designed for app testing, whereas others will literally sandbox your whole system unless you authorize changes to specific files on the computer. Last but not least, Microsoft is in on the action with Windows Sandbox as well.
Test Development Developers frequently need to see how their applications behave in operating systems other than the one they build on. As in the applications programmer example earlier in the chapter, a VM lets them install each target OS on their existing workstation and test against it, rather than requiring a separate physical machine for every platform.
Application Virtualization Application virtualization is a common use of virtual machines as well. It usually takes one of two forms. The first is virtualizing legacy software or a legacy OS. We introduced legacy software earlier in the chapter—it's basically old, outdated software. The problem is that legacy apps often only run on legacy OSs, so you need to either virtualize the app in a newer OS and tweak it like crazy to get it to work, or virtualize it in an older OS that has no business running on its own server either.
The second use is cross-platform virtualization. It allows programs coded for one type of hardware or operating system to work on another that it's not designed to work on. For example, an app designed for macOS could work in a virtualized version of that OS within a Windows-based server.
The underlying purpose of all of this is to save money. Cloud providers can achieve economies of scale, because adding additional clients doesn't necessarily require the purchase of additional hardware. Clients don't have to pay for hardware (or the electricity to keep the hardware cool) and can pay only for the services they use. End users, in the workstation example we provided earlier, can have multiple environments to use without needing to buy additional hardware as well.
The key enabler for virtualization is a piece of software called the hypervisor, also known as a virtual machine manager (VMM). The hypervisor software allows multiple operating systems to share the same host, and it also manages the physical resource allocation to those virtual OSs. As illustrated in Figure 8.24, there are two types of hypervisors: Type 1 and Type 2.
A Type 1 hypervisor sits directly on the hardware, and because of this, it's sometimes referred to as a bare-metal hypervisor. In this instance, the hypervisor is basically the operating system for the physical machine. This setup is most commonly used for server-side virtualization, because the hypervisor itself typically has very low hardware requirements to support its own functions. Type 1 is generally considered to have better performance than Type 2, simply because there is no host OS involved and the system is dedicated to supporting virtualization. Virtual OSs are run within the hypervisor, and the virtual (guest) OSs are completely independent of each other. Examples of Type 1 hypervisors include Microsoft Hyper-V, VMware ESXi, and Citrix Hypervisor (formerly XenServer). Figure 8.25 shows the Hyper-V interface. Exercise 8.3 walks you through the steps to enable Hyper-V in Windows 10.
FIGURE 8.24 Type 1 and Type 2 hypervisors
FIGURE 8.25 Microsoft Hyper-V
A Type 2 hypervisor sits on top of an existing operating system, called the host OS. This is most commonly used in client-side virtualization, where multiple OSs are managed on the client machine as opposed to on a server. An example of this would be a Windows user who wants to run Linux at the same time as Windows. The user could install a hypervisor and then install Linux in the hypervisor and run both OSs concurrently and independently. The downsides of Type 2 hypervisors are that the host OS consumes resources, such as processor time and memory, and a host OS failure means that the guest OSs fail as well. Examples of Type 2 hypervisors include Microsoft's Windows Virtual PC and Azure Virtual Server, Oracle VM VirtualBox, VMware Workstation, and Linux KVM.
As you might expect, running multiple OSs on one physical workstation can require more resources than running a single OS. There's no rule that says a workstation being used for virtualization is required to have more robust hardware than another machine, but for performance reasons, the system should be well equipped. This is especially true for systems running a Type 2 hypervisor, which sits on top of a host OS. The host OS will need resources, too, and it will compete with the VMs for those resources. Let's talk about specific requirements.
The primary resources here are the same as you would expect when discussing any other computer's performance: CPU, RAM, hard drive space, and network performance. From the CPU standpoint, know that the hypervisor can treat each core of a processor as a separate virtual processor, and it can even create multiple virtual processors out of a single core. The general rule here is that the faster the processor the better, but really, the more cores a processor has, the more virtual OSs it can support in a speedy fashion. Within the hypervisor, there will most likely be an option to set the allocation of physical resources, such as CPU priority and amount of RAM, to each VM.
Some hypervisors require that the CPU be specifically designed to support virtualization. For Intel chips, this technology is called virtualization technology (VT), and AMD chips need to support AMD-V. Pretty much every processor today supports virtualization, but you might run across older ones that do not. In addition, many system BIOSs/UEFIs have an option to turn on or turn off virtualization support. If a processor supports virtualization but the hypervisor won't install, check the BIOS/UEFI and enable virtualization. The specific steps to do this vary based on the BIOS/UEFI, so check the manufacturer's documentation. An example of what this might look like is shown in Figure 8.27.
Memory is always a big concern for computers, and virtual ones are no different. When you're installing the guest OS, the hypervisor will ask how much memory to allocate to the VM. This can be modified later if the guest OS in the VM requires more memory to run properly. Always remember, though, that the host OS requires RAM, too. Thus, if the host OS needs 4 GB of RAM and the guest OS needs 4 GB of RAM, the system needs to have at least 8 GB of RAM to support both adequately.
FIGURE 8.27 Enabling virtualization in the BIOS/UEFI
Hard disk space works the same way as RAM. Each OS will need its own hard disk space, and the guest OS will be configured via the hypervisor. Make sure that the physical computer has enough free disk space to support the guest OSs.
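A quick back-of-the-envelope check along these lines helps confirm that a host has enough RAM and free disk space for the guests you plan to run. The numbers below are illustrative assumptions, not recommendations:

    # Illustrative sizing figures in GB; real values depend on the OSs involved
    host_os = {"ram": 4, "disk": 60}
    guests = [
        {"name": "Windows guest", "ram": 4, "disk": 80},
        {"name": "Linux guest",   "ram": 2, "disk": 25},
    ]

    ram_needed = host_os["ram"] + sum(g["ram"] for g in guests)
    disk_needed = host_os["disk"] + sum(g["disk"] for g in guests)

    print(f"RAM needed for host plus guests:  {ram_needed} GB")
    print(f"Disk needed for host plus guests: {disk_needed} GB")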
Finally, from a networking standpoint, each of the virtual desktops will typically need full network access, and configuring the permissions for each can sometimes be tricky.
The virtual desktop is often called a virtual desktop infrastructure (VDI), a term that encompasses the software and hardware needed to create the virtual environment. A VDI can be hosted on premises (in the same building as the company using it) or in the cloud. The VM will create a virtual NIC and manage the resources of that NIC appropriately. The virtual NIC doesn't have to be connected to the physical NIC; an administrator could create an entire virtual network within the virtual environment where the virtual machines just talk to each other.
That's not normally practical in the real world, though, so the virtual NIC will be connected to the physical NIC. Configuring a virtual switch within the hypervisor normally does this. The virtual switch manages the traffic to and from the virtual NICs and logically attaches to the physical NIC (see Figure 8.28). Network bandwidth is often the biggest bottleneck when you are running multiple virtual OSs. If the network requirements are that each of four VMs on one physical machine get Gigabit Ethernet service all the time, one physical Gigabit Ethernet NIC isn't going to cut it.
FIGURE 8.28 Virtual NICs connecting to a physical NIC
Virtual machines are created to exist and function just like a physical machine. Thus, all the requirements that a physical machine would have need to be replicated by the hypervisor, and that process is called emulation. The terms hypervisor and emulator are often used interchangeably, although they don't mean the same thing. The hypervisor can support multiple OSs, whereas technically, an emulator appears to work the same as one specific OS. As for requirements, the emulator and the hypervisor need to be compatible with the host OS. That's about it.
In the early days of the cloud, a common misconception was that virtual machines couldn't be hacked. Unfortunately, some hackers proved this wrong. Instead of attacking the OS in the VM, hackers have turned their attention to attacking the hypervisor itself. Why just hit one OS when you can hit all of them on the computer at the same time? A number of virtualization-specific threats focusing on the hypervisor have cropped up, but updates have fixed the issues as they have become known. The solution to most virtual machine threats is to always apply the most recent updates to keep the system(s) current.
At the same time, all the security concerns that affect individual computers also apply to VMs. For example, if Windows is being operated in a VM, that instance of Windows still needs antimalware software installed on it.
Now that we have covered the key concepts behind client-side virtualization, it's time to practice. Exercise 8.4 walks you through installing the Oracle VirtualBox hypervisor on a Windows 10 computer and then installing Lubuntu (a distribution of Linux). Normally, installing a second OS involves a relatively complicated process where you need to dual-boot your computer. You're not going to do that here. Instead, you will use the VirtualBox hypervisor that allows you to create a new virtual system on your hard drive and not affect your existing Windows installation. We promise you that this exercise will not mess up Windows on your computer! And when you're finished, you can just uninstall VirtualBox, if you want, and nothing will have changed on your system. This exercise is admittedly a bit long because there are a lot of steps, and it's also probably more “advanced” than typical A+ materials. That said, we encourage you to try it—it usually ends up being one of our students' favorite exercises during training classes.
In this chapter, you learned about different server roles and technologies that work on local networks as well as ones that work on the Internet to make the cloud possible.
First, you learned about specific server roles. Options include DNS, DHCP, file (fileshare), print, mail, syslog, web, and AAA servers. We talked about what each one of these does as well as where they should be located on the network, either inside the secure network or in the screened subnet. In addition to servers, many networks will have Internet appliances dedicated to security, such as spam gateways and UTM devices, as well as load balancers and proxy servers to manage traffic. Some networks also support legacy or embedded systems, such as SCADA. Although these systems are old and outdated, they often provide critical functionality on the network. We also looked at some examples of IoT devices.
The next topic was cloud computing. Cloud computing has been one of the hottest topics in IT circles for several years now and will likely continue to be so for several more years. Cloud providers sell several different types of services, such as IaaS, PaaS, and SaaS. There are also different types of clouds, such as public, private, community, and hybrid. Cloud features include shared resources, metered utilization, rapid elasticity, high availability, and file synchronization. You can use cloud services for storage, virtual applications (such as email or word processing), or both. Cloud computing is dependent on virtualization.
Virtualization removes the barrier of there being a one-to-one relationship between computer hardware and an operating system. You learned about what virtualization does and the core piece of software, called the hypervisor. You learned the purpose of virtual machines, which includes sandboxing, test development, and application virtualization for legacy software and OSs and cross-platform virtualization, as well as requirements for client-side virtualization. The chapter finished with a long exercise on installing a hypervisor and Lubuntu on a Windows computer.
Know the various roles that servers can play on a network. Roles include DNS, DHCP, file (fileshare), print, mail, syslog, web, and AAA servers. File servers (fileshares) store files for users, and may have optical media and perform backups too. Print servers host printers. Mail servers store, send, and receive email. A syslog server is used to log system events. Web servers host web pages that users access across a network. AAA servers validate user credentials, and then allow users to access resources and track access.
Know what DNS servers do. DNS servers resolve hostnames to IP addresses. Without DNS servers, finding your favorite websites on the Internet would be an incredibly challenging task. DNS servers have a zone file with hostname to IP address mappings.
Understand how DHCP servers work. DHCP servers assign IP addresses and configuration information to client computers. Clients request the information via broadcast. Each DHCP server has a scope with a configured range of available IP addresses. The server may also provide additional configuration information, such as the address of the default gateway (a router) and a DNS server.
Understand what spam gateways and UTM systems do. Spam gateways help email servers detect and deal with unwanted spam email. Unified threat management (UTM) systems centralize security management and often replace traditional firewalls.
Know what load balancers and proxy servers do. Both types of servers help manage network traffic. Load balancers do so by sending incoming requests to different, typically identical servers to spread out the workload. Proxy servers make requests on behalf of clients.
Know what legacy and embedded systems are. Legacy systems are older technology no longer supported by the manufacturer. Embedded systems are those that are critical to a process. SCADA is an example of a legacy and embedded system.
Know some examples of services provided by Internet of Things (IoT) devices. Some device types include thermostats, home security and automation, and voice-enabled speakers and digital assistants.
Understand the four different types of clouds. A cloud can be public, private, hybrid, or community.
Know the differences between SaaS, IaaS, and PaaS. All of these are cloud terms. In infrastructure as a service (IaaS), the provider supplies the network infrastructure. In platform as a service (PaaS), software development tools and platforms are also provided. The highest level is software as a service (SaaS), where the provider supplies the infrastructure and applications.
Understand cloud concepts of shared resources, metered utilization, rapid elasticity, high availability, and file synchronization. All clouds use shared resources, which can be internal or external. A pool of resources is purchased and each participant in the cloud pays for part of it. Metered utilization shows how much a client has used and will be billed for. Rapid elasticity means that a client can quickly get more (or fewer) resources as needed. High availability means that services are always or almost always available, such as three nines five. Cloud file storage services include iCloud and Google Drive, and in most cases they have synchronization apps to sync files to the mobile device.
Understand the purpose of virtual machines and what they require. Virtual machines are designed to save providers and users money. They allow for multiple OSs to be installed on one computer. VMs can provide sandboxing, test development, and application virtualization for legacy software and OSs and cross-platform virtualization. A virtual machine requires certain levels of resources, an emulator, security, and a network connection.
Understand what virtual desktops and virtual NICs are. A virtual desktop is the collection of software and hardware needed to create a virtual environment. Sometimes it's called a virtual desktop infrastructure (VDI). The virtual NIC, which is controlled by the virtual machine, controls access to other virtual machines on the same system as well as access to the physical NIC.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance-based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answer compares to the authors', refer to Appendix B.
Describe the steps needed to enable Hyper-V in Windows 10.
THE FOLLOWING COMPTIA A+ EXAM 220-1101 OBJECTIVES ARE COVERED IN THIS CHAPTER:
In 1943, the president of IBM, Thomas Watson, was quoted as saying, “I think there is a world market for maybe five computers.” Somewhat more recently, in 1977, Ken Olsen, the founder of one-time computer industry giant Digital Equipment Corporation, stated, “There is no reason anyone would want a computer in their home.” Four years later the personal computer was introduced, ironically by IBM. It's about 80 years past the first quote and about 45 years past the second, but looking back at the history of computers, it's amazing to see how far the industry has come.
As recently as the early 1990s, portable computers were luxuries that were affordable to only the wealthy or the select few businesspeople who traveled extensively. As with all other technologies, though, portable systems became smaller, lighter (more portable), more powerful, and less expensive. Because the technology and price disparity between the two platforms has decreased significantly, laptops have outsold desktops since the mid-2000s. For about 20 years, laptop computers enjoyed a status as the most popular mobile computing device available. Desktop computers still were more powerful and cheaper, but with a laptop, users weren't tethered to a desk.
Technology continued its inevitable progression, and smaller mobile devices came into play. Tablets, smartphones, and even wearable devices obtained enough power and features to be considered computers. Laptop sales have been flat since about 2012, whereas smartphones and wearables have continued to increase in market share. Further, smartphones overtook laptops as the most popular device to access the Internet as of 2015. In stark contrast to the computers of yesteryear, today's most popular computing devices can be so small that the likelihood of misplacing them is a real and ongoing concern.
Every indication is that the movement toward mobile computing will continue, so you definitely need to be well versed in portable technologies, which contain both nifty features and frustrating quirks. In this chapter, you will learn about the features and quirks of laptop and mobile device hardware. There are some differences; for example, the majority of hardware on a laptop is more similar to that of a desktop than to that of a smartphone. But there are plenty of similarities as well. The display of a laptop (and to be fair, most desktop-sized monitors) is a lot like those in smaller devices. Throughout the chapter, we will cover laptops and smaller mobile devices as though they are similar, and call out specific differences where appropriate. We will start with installing and configuring laptop hardware and components, including details on display technologies. Then we'll finish with setting up and configuring accessories and ports on mobile devices.
Hardware in all computing devices needs to perform the same tasks. There are devices that control input and output, processing, short-term and long-term storage, displaying information, and connecting to other computers. This is true regardless of the size of the device. Granted, smaller devices are space-constrained, so the hardware components that do these tasks need to be smaller and consume far less energy.
Working with hardware, then, should be similar across devices, and for the most part it is. Maybe instead of having a physical keyboard you'll have a virtual one (and you can't exactly replace it, per se), but at least you know how to input to the device. Or maybe the screen is smaller, but it and the video card still perform the same tasks that their larger counterparts do. In the following sections, we will look at the specifics regarding working with hardware in laptops and smaller devices.
The first personal computers developed were more similar to today's desktop computers than they were to laptops or mobile devices. The smaller devices do trace their ancestry back to the desktop, though, so it makes sense to compare them to their bigger counterparts. Here, we'll take a high-level look at what makes laptops and mobile devices unique.
Laptops are similar to desktop computers in architecture in that they contain many parts that perform similar functions. However, the parts that make up a laptop are completely different from those in desktop computers. The obvious major difference is size; laptops are space challenged. Another primary concern is heat. Restricted space means less airflow, meaning parts can heat up and overheat faster.
To overcome space limitations, laptop parts are physically much smaller and lighter, and they must fit into the compact space of a laptop's case. It might not sound like much, but there really is a major difference between a 4-pound laptop and a 5-pound laptop if you're hauling it around in its carrying case all day. Also, laptop parts are designed to consume less power and to shut themselves off when not being used, although many desktops also have components such as video circuitry that go into a low-power state when not active. Finally, most laptop components are proprietary—the motherboard is especially proprietary, and the liquid crystal display (LCD) screen from one laptop will not necessarily fit on another.
Manufacturers have also pushed out smaller and smaller portables that are laptop adjacent. For example, in 2007 the first netbooks were introduced. A netbook is an extremely small laptop computer that is lighter in weight and more scaled down in features than a standard laptop. The term netbook is rarely used today, but Chromebooks are an example of that type of technology. Users are attracted to Chromebooks because of their enhanced portability and affordability. The features that remain are ideal for Internet access and emailing. However, many users would find them insufficient for mainstream business usage. Tablets are smaller still, but they are typically characterized as mobile devices, so we'll hold off on talking about them for now.
If you've shopped for a laptop, you have no doubt noticed that the prices of desktop PCs are often quite a bit lower than those for laptop computers, yet the desktops are usually faster and more powerful. If you've ever wondered what makes a laptop so much different from a PC, here are the primary differences between laptops and desktops:
If you were asked to define the primary characteristic of mobile devices, you would probably answer, “They are small,” and you wouldn't be wrong. There are three overarching characteristics of mobile devices that make working with them unique versus working with laptops or desktops: field servicing and upgrading, input methods, and secondary storage. We'll discuss each one in turn.
Ever since the dawn of the portable computer, manufacturers and service providers have based a percentage of their success on warranties and “house calls” to repair devices on the fritz. It's a fact that quasi-permanent components, such as displays and motherboards, are widely considered replaceable only with identical components in laptops and smaller devices. However, technically minded users could take it upon themselves to expand the capabilities of their own system by, for instance, upgrading the hard drive, increasing RAM, using expansion cards and flash devices, and attaching wired peripherals.
Although the ability to repair and expand the functionality of portable devices in the field has all but disappeared, current and past generations of mobile devices have shown that users are not averse to giving up expandability and replaceable parts as long as functionality and convenience outshine the loss.
Although many Android and other non-Apple devices allow the replacement of batteries and the use of removable memory cards as primary storage, even this basic level of access is removed in Apple's mobile devices, including its iPad line of tablet computers. In an effort to produce a sleeker mobile phone, even Android devices have been developed without user access to the battery. For Apple, however, in addition to producing a nice compact package, it is all part of keeping the technology as closed to adulteration as possible. Supporters of this practice recognize the resulting long-term quality. Detractors lament the lack of options.
To service closed mobile devices of any size, you may have to seek out an authorized repair facility and take or send your device to them for service. Attempting your own repairs can void any remaining warranty, and it can possibly render the device unusable. For example, a special screwdriver-like tool is required to open Apple's devices. You cannot simply dig between the seams of the case to pop the device open. Even if you get such a device to open, there is no standard consumer pipeline for parts, whether for repair or upgrading. If you want to try the repair yourself, you could be on your own. You may be able to find helpful videos on YouTube or www.ifixit.com to provide some guidance, though.
Anyone who has been around the business for more than just a few years has likely seen their fair share of components and systems with no user-serviceable parts. For these situations, an authorized technician can be dispatched to your location, home or work, with the appropriate tools, parts, and skills to field-service the system for you. On a slightly different, perhaps subtler note, the bottom line here is that many of today's mobile devices, including some of the larger tablet-style devices, have no field-serviceable parts inside, let alone user-serviceable parts. In some extremes, special work environments similar to the original clean manufacturing environment have to be established for servicing.
With decreased size comes increased interaction difficulties. Human interfaces can become only so small without the use of projection or virtualization. In other words, a computer the size of a postage stamp is fine as long as it can project a full-sized keyboard and a 60" display, for example. Using microscopic real interfaces would not sell much product. Thus, the conundrum is that users want smaller devices, but they do not want to have to wear a jeweler's loupe or big-screen virtualization glasses to interact with their petite devices.
As long as the size of the devices remains within the realm of human visibility and interaction, modern technology allows for some pretty convenient methods of user input. Nearly all devices, from tablet size down, are equipped with touch screens, supplying onscreen keyboards and other virtual input interfaces. On top of that, more and more of the screens are developing the capability to detect more than one contact point.
Generically, this technology is referred to in the industry as multitouch, and it is available on all Apple devices with touch input, including the touchpads of the Apple laptops. Apple, through its acquisition of a company called FingerWorks, holds patents for the capacitive multitouch technology featured on its products. Today, multitouch is more about functionality than novelty. Nevertheless, the markets for both business and pleasure exist for multitouch.
Certainly, touch screens with the capability to sense hundreds of separate points of contact can allow large-scale collaboration or fun at parties. Imagine a coffee table that can allow you to pull out a jigsaw puzzle with the touch of an icon, remembering where you and three friends left off. Imagine all of you being able to manipulate pieces independently and simultaneously and being able to send the puzzle away again as quickly as you brought it out so that you can watch the game on the same surface. This technology exists, and it is for sale today. Early examples were built on Microsoft's PixelSense technology, including the Samsung SUR40. Companies like Ideum build multitouch platform tables, including a monster 86" 4K ultra-high-definition (UHD) display with 100 touch points, allowing up to eight people to use it simultaneously.
On a smaller scale, our mobile devices allow us to pinch and stretch images on the screen by placing multiple fingers on that screen at the same time. Even touchpads on laptops can be made to differentiate any number of fingers being used at the same time, each producing a different result, including pointing and clicking, scrolling and right-clicking, and dragging—all one-handed with no need to press a key or mouse button while gesturing.
HTC created an early touch screen software interface called TouchFLO that has matured into HTC Sense, and it is still in use today on its Android-based mobile devices. TouchFLO is not multitouch capable, nor does it specify the physical technology behind the touch screen, only the software application for it. Theoretically, then, TouchFLO and multitouch could be combined.
The primary contribution of TouchFLO was the introduction of an interface that the user perceives as multiple screens, each of which is accessible by an intuitive finger gesture on the screen to spin around to a subsequent page. On various devices using this concept, neighboring pages have been constructed side by side or above and below one another. Apple's mobile devices employ gestures owing to the contributions of TouchFLO, bringing the potential of combining TouchFLO-like technology and multitouch to bear.
Users of early HTC devices with resistive touch screen technology met with difficulty and discord when flowing to another screen. The matte texture of the early resistive screens was not conducive to smooth gesturing. Capacitive touch screen technology is a welcome addition to such a user interface, making gestures smooth and even more intuitive than ever.
Computers of all sizes and capabilities use similar forms of RAM for primary storage—the storage location for currently running instructions and data. Secondary storage—the usually nonvolatile location where these instructions and data are stored on a more permanent basis—is another story.
The primary concern with smaller devices is the shock they tend to take as the user makes their way through a typical day. Simply strapping a phone to your hip and taking the metro to work presents a multitude of opportunities for a spinning disk to meet with catastrophe. The result would be the frequent loss of user information from a device counted on more and more as technology advances.
Just as many telephony subscribers have migrated from a home landline that stays put to a mobile phone that follows them everywhere, many casual consumers are content to use their mobile device as their primary or only computing system, taking it wherever they go. As a result, the data must survive conditions more brutal than most laptops face, because laptops are most often shut down before being transported.
The most popular solution is to equip mobile devices with very small solid-state drives (SSDs) in place of larger magnetic or solid-state drives. There are no moving parts, the drive stays cooler and resists higher temperature extremes, and SSDs require less power to run than their conventional counterparts.
Now that we've illustrated the primary differences between laptops, mobile devices, and desktops, let's examine some principles for taking laptops apart and putting them back together.
Desktop computers often have a lot of empty space inside their cases. This lets air circulate and also gives the technician some room to maneuver when troubleshooting internal hardware. Space is at a premium in laptops, and rarely is any wasted. With a desktop computer, if you end up having an extra screw left over after putting it together, it might not be a big deal. With laptops, every screw matters, and you'll sometimes find yourself trying to spot minuscule differences between screws to make sure that you get them back into the right places.
Even though repairing a laptop poses unique issues, most of the general troubleshooting and safety tips that you use when troubleshooting a desktop still apply. For example, always make sure that you have a clean and well-lit workspace and be cautious of electrostatic discharge (ESD). General safety tips and ESD prevention are covered in Chapter 21, “Safety and Environmental Concerns.” For now, our general advice is to use antistatic mats or wrist straps if they're available.
One of the key principles for working with laptops is using the right tools to tear the thing apart. It's doubtful that any technician goes into a job thinking, “Hey, I'm going to use the wrong tools just to see what happens.” With laptops, though, it's especially important to ensure that you have exactly the tools you'll need for the job. The two main camps of materials you need are the manufacturer's documentation and the correct hand tools. We'll also emphasize the importance of organization.
Most technicians won't bat an eye at whipping out their cordless screwdriver and getting into a desktop's case. The biggest difference among most desktops is how you get inside the case. Once it's opened, everything inside is pretty standard fare.
Laptops are a different story. Even experienced technicians will tell you to not remove a single screw until you have the documentation handy unless you're incredibly familiar with that particular laptop. Most laptop manufacturers give you access to repair manuals on their websites. Table 9.1 lists the service and support websites for some of the top laptop manufacturers.
Company | URL |
---|---|
Apple | https://support.apple.com/mac |
Asus | https://www.asus.com/support |
Dell | https://www.dell.com/support |
HP | https://support.hp.com |
Lenovo | https://support.lenovo.com |
Sony | https://www.sony.com/electronics/support |
TABLE 9.1 Laptop manufacturers' service and support websites
If the site you need isn't listed, a quick Google search should do the trick. Once you are at the right website, search for the manual using the laptop's model number.
Once you have the manual in hand or on your screen, you need to gather the proper hand tools for the job. For some laptops, you only need the basics, such as small Phillips-head and flat-head screwdrivers. For others, you may need a Torx driver. Gather the tools you need and prepare to open the case. A small flashlight might also come in handy. Small PC technician toolkits are readily available online or from your favorite electronics retailer. They may have a few different sizes of screwdrivers, hex drivers, Torx drivers, tweezers, a screw grabber, and a few other assorted goodies, all in a convenient carrying case. An example is shown in Figure 9.1. Find one you like and never leave home without it.
FIGURE 9.1 PC technician toolkit
Before you crack open the case of your laptop, have an organization and documentation plan in place. Know where you are going to put the parts. Have a container set aside for the screws. You can purchase small plastic containers that have several compartments in them with lids that snap tightly shut, into which you can place screws. You can also use containers designed to organize prescription pills or fishing tackle. The bottom of an egg carton works well too, provided that you don't need to transport the screws from place to place. You don't want the screws falling out and getting lost!
For documentation, many technicians find it handy to draw a map of the computer they're getting into, such as the one shown in Figure 9.2. It can be as complex as you want it to be, as long as it makes sense to you. Taking pictures with your phone is also a smart move, provided that you're allowed to use your phone and don't violate any security or privacy policies.
FIGURE 9.2 Laptop repair road map
The drawing in Figure 9.2 shows the locations of the screws, and it also calls out where the screws should be placed once they're removed. Again, this type of documentation can be as simple or complex as you want it to be, as long as it makes sense and helps you stay organized.
Now that we've covered some key principles, let's take a look at specific components, technologies involved, and how to install and configure them.
In the following sections, you will learn about the various components that make up laptops and how they differ from desktop computer components. These sections deal specifically with laptops, because smaller devices generally don't have field-replaceable components (or you can get specialized training on how to repair them). If you don't remember exactly what each component does, it may help you to refer back to earlier hardware chapters occasionally as you read this chapter.
A typical laptop case is made up of three main parts:
Cases are typically made of some type of plastic (usually ABS plastic or an ABS composite) to reduce weight while providing strength.
Laptop cases are made in what is known as a clamshell design. In a clamshell design, the laptop has two halves, hinged together at the back. The display portion, called the top half, often includes a webcam, microphone, and Wi-Fi antenna. All other components, including the motherboard, memory, storage, keyboard, battery, cooling fan, and speakers, are in the bottom half.
Occasionally, part of the laptop's case or the device's frame will crack and need to be replaced. However, you usually can't just replace the cracked section. Most often, you must remove every component from inside the laptop's case and swap the components over to the new one. This is a labor-intensive process because the screws in laptops are often very small and hard to reach.
Repairing a cracked case often costs several hundred dollars in labor alone. Most times, people who have cracked laptop cases wait until something else needs to be repaired before having the case fixed. Or, they just wait until it's time to upgrade to a new system. The decision on whether to repair or replace the laptop boils down to a few factors. The primary one is whether the user can live with the damage. While case problems can be annoying, most don't inhibit the operation of the machine. The secondary factor is money. The user (or company) needs to decide whether it's really worth spending the money to fix the issue immediately.
One of the components you may need to service is the speakers, which are generally built into the bottom part of the case. Exercise 9.1 will provide an example of removing speakers from a Dell Inspiron 13 7000 laptop. We'll use that model for an example throughout this chapter. We realize that this specific model is a few years old and you might not have access to one, but the exercises can help you understand the process. Besides, it would be impractical to list the specific replacement steps for all makes and models out there. If you have a different model, download its service manual and perform the necessary steps.
Before we get to Exercise 9.1, we want to remind you of a few safety steps to take before working on a laptop. We're not going to repeat these instructions before every exercise, but always perform these steps before digging into a laptop (or a desktop, for that matter):
Removing the speakers from the Dell Inspiron 13 7000 was relatively easy. On some laptops, the speakers or speaker wire are completely buried underneath several other components, and you need to remove those first. As we move through this chapter, you will see situations where you need to remove several components before you can get to the one you want.
The display system is the primary component in the top half of the clamshell case. (The wireless antenna often resides here too, and we'll get to that in just a bit.) Much like all other laptop components, the display is more or less a smaller version of its desktop counterpart. What is unique to laptop displays, though, is that for some time, the technology used in them was actually more advanced than what was commonly used in desktops. This is due to LCD technology.
Before LCD technology, computer displays used cathode-ray tube (CRT) technology (like old-school televisions) and were big, bulky, and hardly mobile. We introduced LCD and organic light-emitting diode (OLED) concepts in Chapter 3, “Peripherals, Cables, and Connectors,” but we'll do a quick refresher in the “Screen” section later. Let's focus now on the different components that are required to make these types of displays work on a laptop.
The video card in a laptop or desktop with an LCD monitor does the same thing, regardless of what type of machine it's in. It's responsible for generating and managing the image sent to the screen. LCD monitors are digital, so laptop video cards generate a digital image. Laptop manufacturers put video cards that are compatible with the given display in a laptop, and most laptop manufacturers choose to integrate the LCD circuitry on the motherboard to save space.
LCD displays do not produce light, so to generate brightness, LCD displays have a backlight. A backlight is a small lamp placed behind, above, or to the side of an LCD display. The light from the lamp is diffused across the screen, producing brightness. The typical laptop display uses a cold cathode fluorescent lamp (CCFL) as its backlight. As their name implies, they are fluorescent lights, and they're generally about 8" long and slimmer than a pencil. You might see laptops claiming to have 2-CCFL, which just means that they have two backlights. This can result in a laptop with a brighter screen. CCFLs generate little heat, which is always a good thing with laptops.
Another backlight technology uses LEDs instead of CCFLs. Instead of CCFL tubes, they have strips of LED lights, and most LED backlights do not need an inverter. Smaller devices, such as tablets and phones, almost exclusively use LED backlighting, which is smaller and consumes less power than CCFLs.
Fluorescent lighting, and LCD backlights in particular, require fairly high-voltage, high-frequency energy. Another component is needed to provide the right kind of energy, and that's the inverter.
The inverter is a small circuit board installed behind the LCD panel that takes DC current and inverts it to AC for the backlight. If you are having problems with flickering screens or dimness, it's more likely that the inverter is the problem, not the backlight itself.
There are two things to keep in mind if you are going to replace an inverter. First, they store and convert energy, which means they have the potential to discharge that energy. To an inexperienced technician, they can be dangerous. Second, make sure the replacement inverter was made to work with the LCD backlight that you have. If they weren't made for each other, you might have problems with a dim screen or poor display quality.
The screen on a laptop does what you might expect—it produces the image that you see. The overall quality of the picture depends a lot on the quality of the screen and the technology your laptop uses. Current popular options include variants of LCD and OLED. We introduced these technologies in Chapter 3, but here's a quick review:
Liquid Crystal Display First used with portable computers, LCDs are based on the electrical property that when a current is passed through a semi-crystalline liquid, the crystals align themselves with the current. Transistors are then combined with these liquid crystals to form patterns, such as numbers or letters. LCDs are lightweight and have low power requirements.
Liquid crystals do not produce light, so LCD monitors need a lighting source to display an image—the backlight that we already discussed. If you see a laptop advertised as having an LED display, it's an LCD monitor with LED backlighting.
There are three popular variants of LCD monitors in use today: in-plane switching (IPS), twisted nematic (TN), and vertical alignment (VA). All three employ LCD technology, meaning that they use liquid crystals and transistors to form patterns; they differ in how the crystals are aligned.
Organic Light-Emitting Diode In OLED displays, the OLEDs themselves are both the image-producing elements and the light source. An organic light-emitting compound forms the heart of the OLED, and it is placed between an anode and a cathode, which pass a current through the electroluminescent compound, causing it to emit light. An OLED, then, is the combination of the compound and the electrodes on each side of it. The electrode in the back of the OLED cell is usually opaque, allowing a rich black display when the OLED cell is not lit. The front electrode should be transparent, to allow the emission of light from the OLED.
If thin-film electrodes and a flexible compound are used to produce the OLEDs, an OLED display can be made flexible, which is not only cool, but it allows it to function in places where other display technologies could never work.
Because OLEDs create the image in an OLED display and supply the light source, there is no need for a backlight, so power consumption is less than it is in LCD panels. Additionally, the contrast ratio of OLED displays exceeds that of LCD panels, meaning that in darker surroundings, OLED displays produce better images than LCD panels produce. Generally speaking, OLED monitors are the highest-quality monitors you will find on the market today. OLED is found in smaller devices such as smartphones as well.
A digitizer is a device that can be written or drawn on, and the content will be converted from analog input to digital images on the computer. Digitizers take input from a user's finger or a writing utensil, such as a stylus. When built into the display, they might be the glass of the display itself, or they might be implemented as an overlay for the display. For touch-screen devices, the digitizer might be the primary method of input. For other devices, such as a laptop with a touch screen, users might find the digitizer helpful for capturing drawings or handwritten notes.
Webcams are nearly universal on laptops today. The most common placement is right above the display on the laptop, although some are below the screen. Most laptops with webcams will also have a smaller circle to the left of the lens, which has a light that turns on to illuminate the user when the webcam is on. Some people are a bit paranoid about their webcams, and they will put a piece of tape over the camera when it's not in use.
If you're going to use a webcam to conduct a video call with someone, it helps if they can hear you too. That's where the microphone comes into play. Microphones are often built into the display unit as well. The webcam shown in Figure 9.7 has the illumination light to the left and the microphone inputs on both sides of the lens. Microphones can also be built into the bottom half of the clamshell, either above the keyboard or somewhere on the front bezel.
Practically all laptops produced today include built-in Wi-Fi capabilities. Considering how popular wireless networking is today, it only makes sense to include 802.11 functionality without needing to use an expansion card. With laptops that include built-in Wi-Fi, the Wi-Fi antenna is generally run through the upper half of the clamshell case. This is to get the antenna higher up and improve signal reception. The wiring will run down the side of the display and through the hinge of the laptop case and plug in somewhere on the motherboard.
FIGURE 9.7 Webcam and microphone
The Wi-Fi antenna won't affect what you see on the screen, but if you start digging around in the display, know that you'll likely be messing with your wireless capabilities as well.
While going through the exercises in this chapter, you've probably noticed that the inside of a laptop is tight and complex, and navigating it can be a challenge. Laptop displays are the same way. There are not as many components as in the bottom half of a laptop, but space is even tighter. Within a display, the components you might need to replace include the screen (or touch screen/digitizer, which some manufacturers call the display panel), Wi-Fi antenna, webcam, microphone, and inverter.
In most cases, servicing components within the display means removing the display assembly from the base assembly. Exercise 9.2 shows you how to do that.
If you're replacing the entire display unit, all you need to do is get the new unit and reverse the steps you followed in Exercise 9.2. If you're replacing a component within the display unit, you need to go further. To get to the Wi-Fi antenna, webcam, microphone, and inverter, you must also remove the display panel. Exercise 9.3 gives you the general steps needed to accomplish this.
As with desktop computers, the motherboard of a laptop is the backbone structure to which all internal components connect. However, with a laptop, almost all components must be integrated onto the motherboard, including onboard circuitry for the USB, video, expansion, and network ports of the laptop. With desktop systems, the option remains to not integrate such components. Because of the similarities between laptop and desktop components, some material in the next few sections will be familiar to you if you have read Chapter 1, “Motherboards, Processors, and Memory.”
The primary differences between a laptop motherboard and a desktop motherboard are the lack of standards and the much smaller form factor. As mentioned earlier, most motherboards are designed along with the laptop case so that all the components will fit inside. Therefore, the motherboard is nearly always proprietary, and that's what we mean by “lack of standards.” They use the technologies you're used to, such as USB and Wi-Fi, but it's very unlikely that you're going to be able to swap a motherboard from one laptop to another, even if both laptops are from the same manufacturer. Figure 9.11 shows an example of a laptop motherboard. Its unusual shape is designed to help it fit into a specific style of case with the other necessary components.
To save space, components of the video circuitry (and possibly other circuits as well) may be placed on a thin circuit board that connects directly to the motherboard. This circuit board is often known as a riser card or a daughterboard; an example is shown in Figure 9.12. We also labeled them in the Dell in Figure 9.4. They're harder to see when they're in the case with other components, but you may be able to tell that they have very different shapes from those in Figure 9.11 and Figure 9.12.
FIGURE 9.11 A laptop motherboard
FIGURE 9.12 A laptop daughterboard
Having components performing different functions (such as video, audio, and networking) integrated on the same board is a mixed bag. On one hand, it saves a lot of space. On the other hand, if one part goes bad, you have to replace the entire board, which is more expensive than just replacing one expansion card. Exercise 9.4 walks you through the steps to remove the motherboard from the Dell Inspiron 13 7000. As you'll see, you need to remove several components and disconnect several connectors before you can get the motherboard out.
Just as with desktop computers, the central processing unit (CPU) is the brain of the laptop computer. And just like everything else in a laptop, the CPU is smaller and not quite as powerful as its desktop counterpart. The spread between the speed of and number of cores in a laptop CPU and that of a desktop model can be significant. Fortunately, the gap has closed over the years, and laptop processors today are pretty fast—most people think they perform just fine. It's up to the user to determine if the difference in speed hurts their usage experience.
Laptops have less space than desktops, and therefore the CPU is usually soldered onto the motherboard and is not upgradable. You can see the processor of the Dell we've been working on in Figure 9.14—it's the small silver square to the left of the RAM. Within confined computing spaces, heat is a major concern. Add to that the fact that the processor is the hottest-running component, and you can see where cooling can be an issue. To help combat this heat problem, laptop processors are engineered with the following features:
While many portable computers will have processors that have just as many features as their desktop counterparts, others will simply use stripped-down versions of desktop processors. Although there's nothing wrong with this, it makes sense that components specifically designed for laptops fit the application better than components that have been retrofitted for laptop use. Consider an analogy to the automobile industry: it's better to design a convertible from the ground up than simply to cut the top off an existing coupe or sedan.
Laptops don't use standard desktop computer memory chips, because they're too big. In fact, for most of the history of laptops, there were no standard types of memory chips. If you wanted to add memory to your laptop, you had to order it from the laptop manufacturer. Of course, because you could get memory from only one supplier, you got the privilege of paying a premium over and above a similar-sized desktop memory chip.
Fortunately, the industry seems to have settled on one form factor, which is the small outline dual inline memory module (SODIMM), which we introduced in Chapter 1. Recall that they're much smaller than standard DIMMs, measuring 67.6 millimeters (2.6") long and 32 millimeters (1.25") tall. SODIMMs are available in a variety of configurations, including older 32-bit (72-pin and 100-pin) and 64-bit (144-pin SDRAM, 200-pin DDR/DDR2, 204-pin DDR3, 260-pin DDR4, and 262-pin DDR5) options. Different standards of DDR SODIMMs are only a few millimeters longer or shorter than other versions. You probably won't be able to tell the difference unless they are right next to each other, or unless you try to install them and they don't fit in the socket. (You should never have the latter problem, though, because you will check the documentation first!) Figure 9.15 shows a laptop DDR3 SODIMM under a desktop DDR2 DIMM for a size comparison.
FIGURE 9.15 Desktop DIMM and laptop SODIMM
Just as with desktop computers, make sure the SODIMM you want to put into the laptop is compatible with the motherboard. The same standards that apply to desktop memory compatibility apply to laptops. This means that you can find DDR, DDR2, DDR3, DDR4, and DDR5 SODIMMs for laptops. DDR has topped out at 1 GB per module, while DDR2 and DDR3 SODIMM modules can be purchased in sizes up to 8 GB, DDR4 up to 32 GB, and DDR5 up to 64 GB (at the time this book was being written). Exercise 9.5 shows you how to replace SODIMMs in a laptop.
FIGURE 9.18 172-pin MicroDIMM
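For quick reference, here is a minimal Python sketch (a study aid only, not any vendor's tool) that turns the 64-bit SODIMM pin counts listed above into a simple lookup table:

```python
# Quick-reference lookup of 64-bit SODIMM pin counts to memory generations,
# based on the values given in the text above.
SODIMM_PINS = {
    144: "SDRAM",
    200: "DDR/DDR2",
    204: "DDR3",
    260: "DDR4",
    262: "DDR5",
}

def identify_sodimm(pin_count: int) -> str:
    """Return the memory generation for a given 64-bit SODIMM pin count."""
    return SODIMM_PINS.get(pin_count, "Unknown -- check the laptop's documentation")

if __name__ == "__main__":
    for pins in (204, 260, 300):
        print(f"{pins}-pin SODIMM: {identify_sodimm(pins)}")
```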
Storage is important for every computer made. If you can't retrieve important files when you need them, the computer isn't very useful. While the trend is moving toward storing more data online (in the cloud), there's still considerable need for built-in storage.
Laptops don't have the room for the full-sized 3.5" hard drives that desktop computers use. Smaller form factor drives at 2.5" or 1.8" that are less than ½" thick are more appropriate. These drives share the same controller technologies as desktop computers; however, they use smaller connectors. Figure 9.19 shows an example of a standard 3.5" hard drive compared to a 2.5" laptop hard drive.
FIGURE 9.19 A desktop hard drive (left) compared to a laptop hard drive (right)
To save space and heat, most laptops today use a solid-state drive (SSD), which we introduced in Chapter 2, “Expansion Cards, Storage Devices, and Power Supplies.” Recall that, unlike conventional magnetic hard drives, which use spinning platters, SSDs have no moving parts. They use the same solid-state memory technology found in the other forms of flash memory. Otherwise, they perform just like a traditional magnetic HDD, except they're a lot faster.
Connecting a regular SSD in a desktop is usually just like connecting a regular HDD; they have the same Parallel Advanced Technology Attachment/Serial Advanced Technology Attachment (PATA/SATA) and power connectors. Laptops often have a specialized connector and a single cable that handles both data and power, as shown in Figure 9.20. Most manufacturers also make them in the same physical dimensions as traditional hard drives, even though they could be made much smaller, like removable flash drives. (This is probably to preserve the “look” of a hard drive so as to not confuse consumers or technicians.)
FIGURE 9.20 2.5" SSD, motherboard connector, and cable
Newer SSDs may come in the even smaller M.2 form factor. In fact, the Dell Inspiron we've been working on in this chapter has an M.2 SSD. Figure 9.21 shows the SSD and the M.2 connector it plugs into. Exercise 9.6 walks you through removing an M.2 SSD from a laptop.
FIGURE 9.21 M.2 SSD and M.2 connector
Removing 2.5" or 1.8" SSDs from a laptop will require a few more steps than Exercise 9.6 did. First, disconnect the drive cable from the motherboard (refer to Figure 9.20), and then remove the two to four screws that hold the drive or its mounting bracket in place. Figure 9.23 shows an example of an SSD in a Lenovo ThinkPad that uses three screws—we've already removed them but highlighted the holes for you. Then lift out the drive (and bracket as needed). We show the disconnected drive next to the laptop in Figure 9.24.
FIGURE 9.23 2.5" SATA SSD
FIGURE 9.24 Disconnected SATA SSD
After replacing a hard drive or upgrading a laptop to a newer model, it may be necessary to get all of the user's data from the old drive (or laptop) onto the new one. Doing so is often referred to as data migration or hard drive migration. When setting up for a migration, there are two key questions to ask. The first is, what needs to be migrated? There's a big difference between moving someone's data and moving operating system settings and configurations. The second is, will the old drive be accessible when the new one is up and running? If so, there are more options for performing a migration. The answers to these questions will help determine which migration method is preferable: manual file copying or migration software.
Copying Files Manually If all that's needed is to get user data from the old device to the new one, then manually copying files is usually an easy option. If the old drive will be inaccessible after the replacement, then files can be copied from the old drive to the cloud or to an external hard drive first. The drive replacement can be made, and then the files copied from the cloud or external drive to the new hard drive. If the old drive will still be accessible, then files can be copied across a network (if conditions permit) or copied from computer to computer with a transfer cable.
The downside to this method is that it doesn't transfer any settings or configurations (such as a user's desktop, color scheme, installed printers, etc.), and it normally doesn't work for apps. Apps will need to be reinstalled on the new drive, and the user will have to reconfigure their settings.
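As an illustration of the manual approach, here is a minimal Python sketch that copies a user's common data folders to an external drive. The user name, folder list, and drive letter are hypothetical and would need to be adjusted for the actual system:

```python
import shutil
from pathlib import Path

# Hypothetical paths -- substitute the real user profile and external drive.
SOURCE_PROFILE = Path(r"C:\Users\jsmith")
DESTINATION = Path(r"E:\migration\jsmith")

# Common data folders to migrate; note that settings and apps are not moved this way.
FOLDERS = ["Documents", "Desktop", "Pictures", "Downloads"]

for folder in FOLDERS:
    src = SOURCE_PROFILE / folder
    dst = DESTINATION / folder
    if src.exists():
        # dirs_exist_ok=True lets the copy resume into an existing destination folder.
        shutil.copytree(src, dst, dirs_exist_ok=True)
        print(f"Copied {src} -> {dst}")
    else:
        print(f"Skipped {src} (not found)")
```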
Using Migration Software Migration software can move files, settings, configurations, and apps from one drive to another. In most cases for this to work, both drives need to be accessible. For example, the old drive needs to be in a different operational computer, in a second expansion slot on the new computer, or otherwise connected, such as through a USB port using a USB-to-SATA adapter. If the connection is available, the migration software can work its magic.
There are several different migration apps available on the market. For example, Laplink PCmover (http://web.laplink.com) works with Windows computers. It has several versions, including the Home edition, which is good for normal users, and a Professional version that can copy multiple user profiles. Macrium Reflect (http://macrium.com) is another option for Windows-based systems, and SuperDuper (http://shirt-pocket.com) works for macOS. All of these apps feature a graphical interface, where you choose what you want to migrate and the software takes it from there.
Nearly all laptops have a hard drive, but rarely does a laptop made today have an internal optical drive. There just isn't room for one. If you need one, you can attach an external optical drive via an expansion port such as USB. It might be a bit slower than an internal drive, but it's better than not having one at all.
Because of the small size of laptops, getting data into them presents unique challenges to designers. They must design a keyboard that fits within the case of the laptop. They must also design some sort of pointing device that can be used with graphical interfaces like Windows. The primary challenge in both cases is to design these peripherals so that they fit within the design constraints of the laptop (low power and small form factor) while remaining usable.
A standard-sized desktop keyboard wasn't designed to be portable, and it simply won't fit in a laptop's case. That usually means laptop keys are not full size; they must be smaller and packed together more tightly. People who learned to type on a typewriter or a full-sized keyboard often have a difficult time adjusting to a laptop keyboard.
Keyboards may need to be replaced if keys are missing or are stuck and won't function. If a user has spilled a beverage onto their keyboard, it could cause problems that necessitate a replacement. On most laptop keyboards, you can't replace just one key; the whole board must be replaced to fix a malfunctioning key.
Laptop keyboards are built into the lower portion of the clamshell. Sometimes, they can be removed easily to access peripherals below them, like memory and hard drives, as in the Lenovo ThinkPad series. Other times, removing the keyboard is one of the most challenging tasks, because nearly all other internal components need to be removed first. Exercise 9.7 illustrates the joy of removing the keyboard from the Dell Inspiron 13 7000.
In addition to using the keyboard, you must have a method of controlling the onscreen pointer in the Windows (or other graphical) interface. Most laptops today include multiple USB ports for connecting a mouse, and you can choose from a wide range of wired or wireless full-sized or smaller mice. There are several additional methods for managing the Windows pointer. Here are some of the more common ones:
Trackball Many early laptops used trackballs as pointing devices. A trackball is essentially the same as a mouse turned upside down. The onscreen pointer moves in the same direction and at the same speed that you move the trackball with your thumb or fingers.
Trackballs are cheap to produce. However, the primary problem with trackballs is that they do not last as long as other types of pointing devices; a trackball picks up dirt and oil from operators' fingers, and those substances clog the rollers on the trackball and prevent it from functioning properly.
Touchpad To overcome the problems of trackballs, a newer technology that has become known as the touchpad was developed. Touchpad is actually the trade name of a product. However, the trade name is now used to describe an entire genre of products that are similar in function.
A touchpad is a device that has a pad of touch-sensitive material. The user draws with their finger on the touchpad, and the onscreen pointer follows the finger motions. Included with the touchpad are two buttons for left- or right-clicking (although with some touchpads, you can perform the functions of the left-click by tapping on the touchpad, and Macs have one button). Figure 9.25 shows a touchpad.
FIGURE 9.25 Laptop touchpad
One problem people have with a touchpad is the location. You'll notice that the touchpad is conveniently placed right below the laptop keyboard, which happens to be where your palms rest when you type. Sometimes this will cause problems, because you can inadvertently cause your mouse cursor to do random things like jump across the screen. Most touchpads today have settings to allow you to control the sensitivity, and they will also differentiate between a palm touching them and a finger. In addition, if you have a sensitive touchpad that is giving you trouble, you can disable it altogether. Exercise 9.8 shows you how to do that in Windows 10. The specific steps to disable the touchpad will differ by manufacturer—you will almost always be able to disable it through the operating system, and some laptops have a function key to disable it as well. The steps in Exercise 9.8 were performed on a Lenovo ThinkPad laptop. Consult the laptop documentation if you are unable to locate the setting.
Although touchpads are primarily used with laptop computers, you can also buy external touchpads that connect to a computer just as you would connect a mouse.
FIGURE 9.28 Point stick
Touch Screen Touch screens are standard fare for smartphones and tablets and many laptop computers as well. We've already introduced the technology as a display device, so here we'll cover it as an input device. The idea is pretty simple: it looks like any other display device, but you can touch the screen and the system senses it. It can be as simple as registering a click, like a mouse, or it can be more advanced, such as capturing handwriting and saving it as a digital note. Although the technical details of how touch screens work are beyond the scope of this chapter, there are a few things to know:
User options to configure touch screens are often limited to calibrating the screen for touch input or pen input. If the touch screen seems to be sensing input incorrectly or not detecting input from the edges of the screen, it might be time to recalibrate it. In Windows, open Control Panel ➢ Tablet PC Settings, and then click Calibrate. Windows will ask the user to specify pen or touch input. Once that's specified, the user will be able to draw on the screen with their desired input tool, and the system will sense it. (In most cases, there will be a crosshairs on the screen and the user needs to touch where it is.) At the end, the user needs to save the calibration data for it to take effect.
Although laptop computers are less expandable than their desktop counterparts, many can be expanded to some extent. The two primary forms of internal expansion used in laptops today are Mini PCIe and M.2.
Since around 2005, Mini PCIe has been the most common slot for laptop expansion cards. We introduced PCIe in Chapter 2, and Mini PCIe is just like the full version except that the connectors are smaller. These cards reside inside the case of the laptop and are connected via a 52-pin card edge connector. Mini PCIe cards come in two sizes. The full-sized cards are 30 mm wide and 51 mm long. Half-sized cards (one is shown in Figure 9.29, with the connector at the bottom) are 30 mm wide and 27 mm long. Mini PCIe cards support USB and PCIe x1 functionality at the same speeds as their full-sized counterparts. Additionally, Mini PCIe cards have 1.5V and 3.3V power options.
We also introduced M.2 in Chapter 2, so we won't go into a great amount of detail here. Figure 9.21 earlier in this chapter shows an M.2 hard drive and slot. For purposes of comparing it to Mini PCIe, know that M.2 uses a narrower connector (22 mm vs. 30 mm) that has more pins (66-pin vs. 52-pin). M.2 supports USB 2.0 and newer. The slowest M.2 slots support PCIe x2, and M-keyed slots support PCIe x4, making M.2 much faster than Mini PCIe. Most M.2 expansion cards focus on communications or storage. Common types of cards you will see in the market include the following:
Many laptops don't come with any free internal expansion slots, either M.2 or Mini PCIe. For example, the Dell we've been working on has two M.2 slots, but both are filled. One has the SSD, and the other is used by the Wi-Fi card. Be sure to check the documentation before you buy an expansion card for a laptop, to see what it supports.
FIGURE 9.29 Mini PCIe card in a laptop
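To put the two slot types side by side, here is a small Python sketch that restates the connector widths, pin counts, and bus support mentioned above as a lookup structure (the summary strings are paraphrased from the text, not an exhaustive specification):

```python
# Side-by-side summary of the laptop expansion slots described above.
LAPTOP_EXPANSION_SLOTS = {
    "Mini PCIe": {
        "connector_width_mm": 30,
        "pins": 52,
        "bus_support": "PCIe x1 and USB, with 1.5V and 3.3V power options",
    },
    "M.2": {
        "connector_width_mm": 22,
        "pins": 66,
        "bus_support": "USB plus PCIe x2, or PCIe x4 on M-keyed slots",
    },
}

for slot, specs in LAPTOP_EXPANSION_SLOTS.items():
    print(f"{slot}:")
    for key, value in specs.items():
        print(f"  {key}: {value}")
```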
Because portable computers have unique characteristics as a result of their portability, they have unique power systems as well. Portable computers can use either of two power sources: batteries or adapted power from an AC or DC source. Regardless of the source of their power, laptops utilize DC power to energize their internal components. Therefore, any AC power source needs to be rectified (converted) to DC. Most laptop display backlights, on the other hand, require high-voltage, low-amperage AC power. To avoid a separate external AC input, an inverter is used to convert the DC power that is supplied for the rest of the system to AC for the backlight. In case it's not obvious, converters and inverters perform opposite functions, more or less.
There are many different battery chemistries that come in various sizes and shapes. Nickel cadmium (NiCd), lithium-ion (Li-ion), and nickel-metal hydride (NiMH) have been the most popular chemistries for laptop batteries. A newer battery chemistry, lithium-polymer (Li-poly), has been gaining in prominence over recent years for smaller devices. Figure 9.30 is a photo of a removable Li-ion battery for an HP laptop.
FIGURE 9.30 A removable laptop Li-ion battery
A removable battery is very easy to replace in the event of a battery failure. However, most laptops today make use of an internal battery, such as the one shown in Figure 9.31. This particular battery is very thin—only 5 mm (about 0.2") thick. Exercise 9.9 shows you how to remove an internal laptop battery.
Battery chemistries can be compared by energy density and power density. Energy density measures how much energy a battery can hold. Power density measures how quickly the stored energy can be accessed, focusing on access in bursts, not prolonged runtime. An analogy to the storage and distribution of liquids might help solidify these concepts. A gallon bucket has a higher “energy density” and “power density” than a pint bottle; the bucket holds more and can pour its contents more quickly. Another common metric for battery comparison is rate of self-discharge, or how fast an unused battery reduces its stored charge.
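To make these metrics concrete, here is a short Python sketch using hypothetical battery figures. The formulas (watt-hours equal volts times amp-hours; energy density is watt-hours per kilogram) are standard, but the specific numbers are invented for illustration:

```python
# Hypothetical battery specifications, for illustration only.
voltage_v = 11.1       # nominal pack voltage
capacity_ah = 5.0      # amp-hours (5,000 mAh)
mass_kg = 0.250        # pack mass

energy_wh = voltage_v * capacity_ah        # stored energy in watt-hours
energy_density = energy_wh / mass_kg       # Wh per kilogram

# Power density describes how quickly energy can be delivered, not how much is stored.
peak_discharge_w = 90.0                    # hypothetical peak draw the pack can sustain
power_density = peak_discharge_w / mass_kg # W per kilogram

print(f"Stored energy:  {energy_wh:.1f} Wh")
print(f"Energy density: {energy_density:.0f} Wh/kg")
print(f"Power density:  {power_density:.0f} W/kg")
```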
Most laptop computers can also use AC power with a special adapter (called an AC adapter) that converts AC power input to DC output. The adapter can be integrated into the laptop, but more often it's a separate “brick” with two cords: one that plugs into the back of the laptop and another that plugs into a wall outlet. Figure 9.32 is a photo of the latter.
FIGURE 9.32 A laptop AC adapter
Another power accessory that is often used is a DC adapter, which allows a user to plug the laptop into the round DC jack power source (usually called an auxiliary power outlet) inside a car or on an airplane. An example is shown in Figure 9.33. These adapters allow people who travel frequently to use their laptops while on the road.
FIGURE 9.33 A DC jack in a car
Use caution when selecting a replacement AC adapter for your laptop. You should choose one rated for the same or higher wattage than the original. You must also pay special attention to the polarity of the plug that interfaces with the laptop. If the laptop requires the positive lead to be the center conductor, for instance, then you must take care not to reverse the polarity. Look for symbols like the ones shown in Figure 9.34, and make sure the new power supply is the same as the old one.
FIGURE 9.34 Polarity symbols
Regarding the input voltage of the adapter, care must also be taken to match the adapter to the power grid of the surrounding region. Some adapters have a fixed AC input requirement. Purchasing the wrong unit can result in lack of functionality or damage to the laptop. Other adapters are autoswitching, meaning that they are able to switch the input voltage they expect automatically based on the voltage supplied by the wall outlet. These units are often labeled with voltage-input ranges, such as 100V to 240V, and frequency ranges, such as 50Hz to 60Hz, and are able to accommodate deployment in practically any country around the world. Nevertheless, you should still ascertain whether some sort of converter is required, even for autoswitching adapters.
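Those rules of thumb can be summarized in a short sketch like the following. The adapter specifications shown are hypothetical, and a real replacement should always be verified against the laptop manufacturer's documentation:

```python
# Hypothetical adapter specifications; verify real values with the manufacturer.
original = {"output_v": 19.5, "watts": 65, "polarity": "center-positive", "input_range": (100, 240)}
replacement = {"output_v": 19.5, "watts": 90, "polarity": "center-positive", "input_range": (100, 240)}

def adapter_ok(old, new, wall_voltage=230):
    """Apply the guidelines from the text: matching output voltage and polarity,
    equal or higher wattage, and an input range that covers the local power grid."""
    checks = {
        "output voltage matches": new["output_v"] == old["output_v"],
        "wattage is the same or higher": new["watts"] >= old["watts"],
        "polarity matches": new["polarity"] == old["polarity"],
        "input range covers the local grid": new["input_range"][0] <= wall_voltage <= new["input_range"][1],
    }
    for rule, passed in checks.items():
        print(f"{rule}: {'OK' if passed else 'FAIL'}")
    return all(checks.values())

print("Safe to use:", adapter_ok(original, replacement))
```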
There are a few internal components we've referenced in chapter exercises, but we haven't given explicit details on how to remove them. That's what this section is for. Note that these components aren't currently on the exam objectives, but it helps to know how to remove them anyway. The four components we'll examine are the fan, heat sink, wireless NIC, and CMOS battery. And since we'll be talking about CMOS, which is related to the BIOS/UEFI, we'll look at how to upgrade (or flash) the BIOS/UEFI as well. Exercise 9.10 shows you how to remove the system fan.
Now that the fan is removed, Exercise 9.11 shows you how to remove the CPU heat sink.
Exercise 9.12 shows you how to remove the wireless NIC. Perhaps ironically, it actually does have wires, but those are for the antenna that runs up into the display.
When you're reconnecting the wireless card, the white antenna cable will go on the main post, which is indicated by a white triangle. The black antenna cable attaches to the auxiliary connector, which is marked with a black triangle.
Exercise 9.13 shows how to remove the CMOS battery. In most laptops, the CMOS battery is covered by a black rubber coating.
Replacement batteries often come with a small amount of adhesive to secure the new battery in place.
Flashing the system BIOS/UEFI is usually a pretty straightforward process. You can download a BIOS/UEFI update from the manufacturer's website and then run the program. Exercise 9.14 shows sample steps for flashing the BIOS on a Dell laptop.
Mobile devices are mostly self-contained units, which aids in their portability. One downside to the portability is that mobile devices can easily be carried away by someone other than their rightful owner. And, if a user is doing work on a laptop in a public place, others could shoulder surf and see things they're not supposed to. The use of physical privacy and security components can help deter these unwanted behaviors.
One way that you can help to physically secure your laptop is through the use of a physical laptop lock, also known as a cable lock. Essentially, a cable lock anchors your device to a physical structure, making it nearly impossible for someone to walk off with it. Figure 9.40 shows a cable with a number combination lock. With others, small keys are used to unlock the lock. If you grew up using a bicycle lock, these will look really familiar.
FIGURE 9.40 A cable lock
Here's how it works. First, find a secure structure, such as the permanent metal supports of your workstation at work. Then, wrap the lock cord around the structure, putting the lock through the loop at the other end. Finally, secure the lock into your cable lock hole on the back or side of your laptop (Figure 9.41), and you're secure. If you forget your combination or lose your key, you're most likely going to have to cut through the cord, which will require a large cable cutter or a hack saw.
FIGURE 9.41 Cable lock insertion point
If someone wants your laptop badly enough, they can break the case and dislodge your lock. Having the lock in place will deter most people looking to make off with it, though.
Many mobile devices allow you to log in or unlock the screen through biometrics, or the use of a body part. For example, you can unlock your smartphone by enabling the facial recognition feature, or some older models have a fingerprint scanner for the same purpose. Other options for higher-end security systems include voice recognition and retinal scanning.
Laptops may have a fingerprint scanner built into the keyboard, such as the square one shown to the right of the Intel sticker in Figure 9.42. Others may be a rectangle or a circle. If a laptop doesn't have a biometric scanner and you want to add one, there are many USB options available.
FIGURE 9.42 Laptop fingerprint scanner
The use of biometrics can increase device security. Someone may be able to guess your password or see you type it in and can hack you that way. But fingerprints and other biometric features are unique. You may have seen movies where a super-secret spy ring replicates someone's fingerprint to gain access to a system, but in real life that type of thing is virtually unheard of.
To keep curious and prying eyes from viewing a laptop (or desktop) monitor, users can install a privacy screen over the front of their display. It's a thin sheet of semi-transparent plastic that reduces the viewing angle so that a display can be read only by someone directly in front of it. An example is shown in Figure 9.43.
FIGURE 9.43 Laptop privacy screen
Some laptops will have a built-in privacy screen setting that can be activated by the function (Fn) keys. For example, it would be Fn+F2 on the laptop keyboard shown in Figure 9.44. In our experience, the built-in privacy screens aren't quite as effective at cutting down the viewing angle as add-on privacy screens are, but they are better than nothing.
FIGURE 9.44 Fn+F2 enables the privacy screen.
In Chapter 7, “Wireless and SOHO Networks,” we introduced the concept of near-field communication (NFC). NFC is used extensively today for mobile payment systems because it's fast and convenient. Because NFC is a wireless communications method, the signals sent to and from NFC devices could be intercepted by a malicious third device. Of course, the maximum distance for NFC is about 10 centimeters (about 4"), so it would be pretty difficult for someone to intercept NFC data without the sender and receiver knowing it, but it's not impossible. The key thing to remember is, if using NFC for payment, be aware of your surroundings and keep an eye out for potentially suspicious-looking electronic devices.
Other than needing to plug in and charge for a while, laptops and mobile devices don't require anything to be plugged into them to provide a user with full functionality. With that said, there are a number of accessories that can be used to enhance functionality. These include touch pens, headsets, speakers, webcams, trackpads/drawing pads, docking stations, and port replicators. We'll examine each one of them after we take a quick look at accessory connection methods.
Smaller mobile devices such as smartphones and tablets generally don't have very many physical ports on them. One or two at the most is all you're going to get. That of course limits expandability unless you are connecting the accessory using wireless methods. Laptops often give you more physical connectors—you're likely to have at least a few USB ports to play with, along with an audio jack and the power port.
We've covered all the connection methods listed in A+ exam objective 1.3 for exam 220-1101 in previous chapters, with the exception of hotspots. Because of that, here we'll just do a quick refresher on the ones you need to know.
There are dozens of mobile accessories in the marketplace, including security devices, input/output tools, communication enablers, and mobile commerce endpoints. Here, we will cover a few input tools and communication enablers.
We could spend an entire chapter talking about input and output devices, given how many exist in the market. For purposes of the A+ exam, though, you only need to know about two: touch pens and trackpads/drawing pads.
A touch pen, also known as a stylus, is a pen-shaped accessory used to write with or as a pointer. Touch pens come in a variety of shapes and sizes, although most have either a narrow tip (like a pen) or a soft rubber ball-like tip, kind of like a pencil eraser. The idea is that a touch pen will act as an input device on a touch screen, enabling freeform writing, drawing, or clicking through answers to a questionnaire, such as at a medical office. Several touch pens are shown in Figure 9.45.
FIGURE 9.45 Several touch pens
Craig Spurrier, CC BY 2.5 (https://creativecommons.org/licenses/by/2.5), via Wikimedia Commons
Another accessory designed for freeform input, often in conjunction with a touch pen, is a trackpad or drawing pad. You're probably familiar with the touchpad on a laptop, right underneath the keyboard. A trackpad or drawing pad is basically the same thing, only bigger and attached through a USB port. The purpose is to allow people to take notes or create drawings that are displayed on a computer screen and stored in a digital format. Some have buttons, like the one shown in Figure 9.46, to provide additional functionality such as on-pad menu management or erase and undo features.
FIGURE 9.46 Drawing pad accessory with stylus
Wacom:Pen-tablet_without_mouse.jpg: by Tobias Rütten, CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0), via Wikimedia Commons
In the last few years, especially thanks to the global pandemic, more and more people have been working remotely. Even those who are still working from offices are affected by this trend, as more meetings become virtual. To make these meetings effective without distracting coworkers or roommates, headsets are a crucial accessory. And of course, if you need to be on camera, a webcam is needed too.
Headsets come in a variety of shapes and sizes. They will connect through the USB port or audio jack. Most laptops will detect the device as it's plugged in, configuring it automatically for you. If instead of a headset you want speakers, either to listen to music or a podcast or to let others hear audio from your system, those will also connect through one of the same two ports.
Nearly all laptops today come with a webcam, but sometimes they break or are of poor quality. In addition, some laptops come with webcams that are below the display as opposed to above it. They work fine, but when the person on video is typing, it looks like their fingers are huge and in your face. We're not fans. For any of those situations, an external webcam can be purchased and connected via USB.
Some laptops are designed to be desktop replacements. That is, they will replace a standard desktop computer for day-to-day use and are thus more fully featured than other laptops. These laptops often have a proprietary docking port. A docking port (as shown in Figure 9.47) is about 1" to 2.5" wide and is used to connect the laptop to a special laptop-only peripheral known as a port replicator, or a similar device called a docking station.
FIGURE 9.47 A docking port
A port replicator reproduces the functions of the ports on the back of a laptop so that peripherals that don't travel with the laptop—such as monitors, keyboards, and printers—can remain connected to the dock and don't all have to be unplugged physically each time the laptop is taken away. A docking station (shown in Figure 9.48) is similar to a port replicator but offers more functionality. Docking stations also replicate ports but can contain things like full-sized drive bays, expansion bus slots, optical drives, memory card slots, and ports that are not otherwise available on a laptop. For example, a laptop might have only two USB ports and an external HDMI port, but its docking station might have eight USB ports along with DVI and DisplayPort for external monitors as well.
FIGURE 9.48 The back and front of a docking station
In this chapter, you learned about laptop and mobile device hardware. We discussed differences between laptops, mobile devices, and desktops, including the various components that make up smaller devices and how they differ in appearance and function from those on a desktop. We also talked about principles for disassembling and reassembling smaller devices in order to not lose parts.
We broke the chapter into several sections to discuss different components. We started with the case, and then moved on to displays, motherboards and processors, memory, storage, input devices, internal expansion, batteries and power adapters, physical privacy and security, other internal components, and external peripherals. In each section, we covered specifics related to laptop versions of these components and included exercises to show you how to service them.
The chapter puts particular emphasis on the components of a display, including the type of display (LCD or OLED), the screen (which may be a touch screen or include a digitizer), Wi-Fi antenna placement, webcam, microphone, and inverter.
Finally, we ended the chapter by examining accessories and their connection methods. Connection methods included USB, Lightning, serial, NFC, Bluetooth, and hotspot. Accessories to remember include touch pens, trackpads/drawing pads, headsets, speakers, webcams, and docking stations and port replicators.
Understand how to install and configure laptop components. Components include the battery, keyboard, RAM, hard drives and solid-state drives, and wireless cards.
Know how to migrate data from an old hard drive to a new one. Options include manually copying files or using specialized migration software.
Know laptop physical privacy and security components. They include biometrics and near-field scanner features.
Understand the components that make up a display in a mobile device. Display components include the screen, which may be a touch screen or include a digitizer; Wi-Fi antenna; camera or webcam; microphone; and inverter.
Know the main types of mobile device displays. The two main types of mobile displays are liquid crystal display (LCD) and organic light-emitting diode (OLED). Within LCD, there are in-plane switching (IPS), twisted nematic (TN), and vertical alignment (VA).
Know how to connect mobile device accessories. Connection methods include USB, Lightning, serial, near-field communication (NFC), Bluetooth, and hotspot.
Be familiar with mobile device accessories. Ones to know include touch pens, trackpads/drawing pads, headsets, speakers, webcams, docking stations, and port replicators.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance-based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answer compares to the authors', refer to Appendix B.
The hard drive on a Dell Inspiron 13 7000 computer failed. You have an extra hard drive of the exact same type. What would you do to replace it?
THE FOLLOWING COMPTIA A+ 220-1101 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
Devices that fit into the palm of your hand today are substantially more powerful than the bulky desktop systems of the 1990s, and those were exponentially more powerful than the room-sized supercomputers of the 1960s. This movement toward smaller devices creates new needs, specifically in the area of interaction with the devices—and that relates directly to the operating system and hardware. Remember that the OS is the interface between the hardware and the user, meaning that the OS needs to interpret user input and translate that into an action for the underlying hardware. We won't cover the details of mobile OSs in this chapter—that's taken care of in Chapter 13, “Operating System Basics.” For this chapter, you just need to remember that Android and iOS are the two dominant mobile operating systems in the market. We'll discuss connectivity and synchronization options for each.
In addition, small mobile devices just don't have room for traditional desktop or laptop hardware. Smartphones don't come with built-in keyboards and mice, nor do small devices have the storage capacity of their larger cousins. Yet with all of their physical space constraints, mobile devices are still expected to perform many of the same tasks as laptops or desktop computers, particularly network connectivity and data storage.
You already know that mobile devices can get on a network and store data. They just do it in a different way than larger systems do. In addition, because of their limited storage capacities, mobile devices can greatly benefit from the syncing of data to a larger storage system, such as a desktop, the cloud, or even an automobile. This chapter focuses on the details of mobile device connectivity, with a particular emphasis on email, as well as device synchronization.
Mobile devices such as smartphones, tablets, and wearables exist primarily for their convenience and their ability to connect to a network. Few people use their small mobile devices to crunch spreadsheets, create detailed presentations, or perform other tasks that are better suited for larger PCs and Macs. On the other hand, mobile devices are great for surfing the Internet, messaging friends, taking pictures and sending them to friends and family, listening to music and watching videos, and even monitoring our health. Each of these tasks requires a network connection, either wired or wireless, at some point.
Of course, mobile devices are well known for their cellular connectivity, including the ability to access data services over the cellular network. Many subscribers pay a premium for data access, and going over your limit on a data plan can be a very expensive mistake (unless your plan offers unlimited data). To ease the expense involved in data-network access, nearly all manufacturers provide alternate access methods in their devices. For example, the service provider levies no additional expense for Wi-Fi or Bluetooth data access.
There are other cellular options on mobile devices with which you may be less familiar, though, such as hotspots, tethering, and firmware updates. In the following sections, we'll dive into those topics in some detail.
Connecting to a Wi-Fi network gives the mobile device access to resources on the network, such as printers, with the added bonuses of Internet access (assuming the wireless network has it), free texting, and perhaps Wi-Fi phone calls. Not only will this likely give the mobile user faster speeds than cellular, it also has the benefit of not counting against the data plan.
Bluetooth access to other devices is designed mainly for short-range communications, such as between the mobile device and a nearby computer with which it can exchange or synchronize data. Other applications of Bluetooth generally do not involve the exchange of data but rather the use of a nearby resource such as a printer to make a hard copy of a document or a headset to make hands-free phone calls.
As you can see, there is a distinct advantage to being able to connect your mobile device to a noncellular network. Mobile devices will use Wi-Fi when connected instead of cellular for data operations. Android devices will ask you during an important update if you would like to use Wi-Fi only or if cellular is okay to use. Figure 10.1, for example, shows the Google Play Store Settings screen on an Android device that clearly states that apps will be auto-updated over Wi-Fi only. (Other options are to not auto-update, or to update over any network, and data charges may apply.)
FIGURE 10.1 Google Play Store Settings screen
The following sections detail concepts relating to cellular networking and attaching to noncellular networks on iPhones and Android devices. After that, you will be introduced to the tasks required to establish email connectivity over these mobile units to corporate and ISP connections as well as integrated commercial providers.
Most of the networking technologies we cover in this book are relatively short range. Wi-Fi and Bluetooth signals can stretch a few hundred meters at best, and NFC is even more limited than that. Cellular is by far the longest-range of the wireless networking technologies available today.
We introduced fourth generation (4G) and fifth generation (5G) in Chapter 7, “Wireless and SOHO Networks.” We'll review those two here, but also introduce a few of their predecessors.
Two major cell standards dominated the industry for the formative years of the mobile phone evolution. The Global System for Mobile Communications (GSM) was the most popular, boasting over 1.5 billion users in 210 countries. The other standard was code-division multiple access (CDMA), which was developed by Qualcomm and used primarily in the United States.
Both were third generation (3G) mobile technologies, and each had its advantages. GSM was introduced first, and when CDMA was launched, it was much faster than GSM. GSM eventually caught up, though, and the two ended up with relatively similar data rates. The biggest issue was that GSM and CDMA were not compatible with each other. Whatever technology you got was based on the provider you signed up with. Sprint (which has since merged with T-Mobile) and Verizon used CDMA, and AT&T and T-Mobile used GSM. That meant that if you had a CDMA phone through Verizon, you couldn't switch (with that phone) to AT&T. And your CDMA phone wouldn't work outside the United States.
When 3G was first introduced in 1998, it specified a minimum data download rate of 200 Kbps. Data rates increased over time, and some carriers claimed to deliver over 7 Mbps downloads—although technically that was with 3.5G. Really, data rates varied by carrier, the equipment installed in the area, the number of users connected to the tower, and the distance from the tower.
In 2008, fourth generation (4G) came into the market. Again, we have already introduced 4G, so we won't go into too much depth here, but will simply review the key points:
The first fifth generation (5G) modem was announced in 2016, but it took until late 2018 for cellular providers to start test piloting 5G. At that point they had a substantial monetary investment in 4G and weren't yet sure of how to best implement the newer standard. As of this writing it's fairly widespread but not available everywhere yet. Because we've already covered 5G in Chapter 7, we'll just review the highlights here:
Having the ability to get gigabit performance over a cellular connection is exciting. Right now, it's hard to imagine how anyone could want or need more, but technology will continue to evolve. Ten or 20 years from now we'll look back and laugh at standards that only gave us a paltry gigabit per second and wonder who was able to survive on speeds that slow.
For now, though, gigabit through 5G is as good as it gets. Now that you're familiar with the standards, let's take a look at how to set up cellular data connections on mobile devices.
Nearly every user of a mobile device knows how to get on the Internet. iOS users have the built-in Safari browser, and Android users have Google Chrome available. Getting online when you have a cellular connection is easy to do if the device has a data plan. Another great feature of mobile devices is that they can share their cellular data connections with other devices. It's basically the exact opposite of joining a mobile phone to a Wi-Fi network, which we will talk about later in this chapter. Next, we will provide details on using and configuring hotspots and tethering, using airplane mode, and how data network updates are handled on mobile devices. We'll also cover a few key acronyms you need to know.
We talked about mobile hotspots in Chapter 9, “Laptop and Mobile Device Hardware,” and as an exam objective it's a topic that appears in a few places. Because it's an important concept for mobile device connectivity, we'll talk about it here as well.
Recall that a mobile hotspot lets you share your cellular Internet connection with Wi-Fi-capable devices. A Wi-Fi-enabled laptop, for example, would look for the mobile phone's Wi-Fi network, join it, and then have Internet access. To enable an iPhone to be a mobile hotspot, go to Settings ➢ Personal Hotspot. Figure 10.2 shows the personal hotspot screen. Simply slide the toggle to the On position to enable it. A password to join the network is provided (and can be changed) as well as instructions on how to join.
FIGURE 10.2 Personal hotspot screen in iOS
FIGURE 10.3 Setting up a personal hotspot
Also recall that there are three potential challenges with using a smartphone as a mobile hotspot: speed, cost, and security. Cellular connections are usually slower than Wi-Fi, so having multiple devices trying to get on the Internet via one cellular link can be slow. From a cost standpoint, you could go over your data plan quite easily by using a hotspot, or the provider might charge you extra just to use it. Finally, there is security. iOS 7 and newer use WPA2, and the iPhone 11 and newer support WPA3, so security is less of an issue, but wireless networks are inherently unsecure because the signals are transmitted through the air.
On the Android OS, a mobile hotspot is also enabled under Settings ➢ Connections ➢ Mobile Hotspot And Tethering (see Figure 10.4). On Android, when you turn mobile hotspot on, it automatically turns Wi-Fi off.
FIGURE 10.4 Enabled mobile hotspot in Android
Tapping Mobile Hotspot will show the network name (SSID) and hotspot password (Figure 10.5), and tapping Configure takes you to the screen to change these options and additional settings (Figure 10.6). Figure 10.6 also shows the security choice—the only option available on this phone is WPA2 PSK.
Some mobile providers limit the number of devices that can join the mobile hotspot. For example, Verizon limits it to 10 devices for 4G LTE phones, and the Android phone used in the example allows a maximum of 15 devices.
FIGURE 10.5 Android hotspot network name and password
Finally, mobile providers sell small devices that are specifically used as mobile hotspots. Figure 10.7 shows an example of a Verizon Wireless MiFi hotspot. These types of devices will either use your existing mobile contract or will need to have an activation of their own.
Tethering is connecting a device to a mobile hotspot so that it can use the phone's cellular Internet connection. The term used to be reserved for connections made via USB cable, as opposed to wireless connections. Some devices will not function as a mobile hotspot but will allow you to tether a laptop (or other device) to them so that the mobile device can share its cellular Internet connection.
FIGURE 10.6 Android hotspot configuration options
FIGURE 10.7 Verizon Wireless MiFi hotspot
Each type of wireless connection can be individually enabled or disabled under Settings. For example, if you look back at Figure 10.3, you can see that in iOS under Settings ➢ Cellular, you can toggle off Cellular Data. You can similarly turn off Wi-Fi under Settings ➢ Wi-Fi. Turning off one connection at a time serves its purpose—for example, disabling Bluetooth to save battery life—but there's also an option to disable all wireless connections at once. It's called airplane mode.
The airplane mode feature was so named because, for many years, no network signals were allowed on airplanes. Today, some airlines allow in-flight Wi-Fi (for a nominal fee, of course), but the name of the feature still sticks. It's not restricted to airplane use, though. If you're in a public area and suspect that someone is trying to hack your phone through the Wi-Fi or Bluetooth connection, airplane mode will quickly shut down all your external connections. Android and iOS both make it easy to enable airplane mode and give you a few ways to get to it.
There are two quick ways to enable airplane mode in Android. The first is to swipe down from the top to open the notifications area. There you may see the quick settings icons, as shown in Figure 10.8. If not, swipe down again to open quick settings. The airplane mode icon looks like an airplane, conveniently enough. Tap it to turn it on or off.
FIGURE 10.8 Android airplane mode in quick settings
The second way is to open Settings ➢ Connections. Swipe the switch to the right to On to enable airplane mode (see Figure 10.9). Notice that an airplane icon also appears in the top-right corner next to the battery indicator. When you turn airplane mode back off, the wireless connections you previously had turned on will be enabled again.
iOS also provides access to airplane mode in two easy ways. One is to open Settings, and it's the first option (see Figure 10.10). When you slide it on, notice how all the other connections are turned off.
The other way is to access it from the Control Center. You can do this from both the lock screen and the Home screen. Simply swipe your finger down from the top of the iPhone's touch screen, and you will get to the Control Center, similar to what's shown in Figure 10.11. Tap the airplane icon in the upper-left corner to enable airplane mode.
FIGURE 10.9 Android airplane mode in Settings
FIGURE 10.10 Airplane mode in iOS
FIGURE 10.11 Airplane mode in iPhone Control Center
When most people think of cellular updates, they probably think of an update to the operating system. Perhaps a new Android version is available or iTunes is alerting them to download the latest incarnation of iOS. Those updates are normal, and completing them takes the active participation of the user. Other updates can occur too, and many of these are transparent to the user.
Before we talk about what those updates are, though, you must first understand that mobile phones don't just have one operating system. This might come as a surprise, but most mobile phones have three operating systems. Duties are split up among the operating systems, simply because there are so many specialized tasks for the phone to perform.
The first OS is pretty obvious. The other two are specialized OSs that handle specific functions for the device. These two OSs are very small, typically only a few hundred kilobytes in size, and they are referred to as real-time operating systems (RTOSs). They are designed to be lightweight and fast, and “real-time” refers to their ability to respond to events within guaranteed time limits, which minimizes lag in data transfers.
First, there is a baseband OS that manages all wireless communication, which is handled by a separate processor. Some people call the wireless communications chips in a mobile phone the radio or the modem. Consequently, you might hear about radio firmware, a radio firmware update, or a modem update. The last two terms are interchangeable with baseband update, which simply means an update of the baseband OS.
Second, a subscriber identity module (SIM) OS manages all data transfers between the phone and the SIM chip, which is a small memory chip that stores user account information, phone identification, and security data, and it is generally tied to a specific carrier.
These RTOSs are normally updated when a user updates an operating system, but occasionally the carrier will update them when the phone is not otherwise busy. Apple currently provides no way to update either RTOS manually on iOS devices. (Users can find information on how to jailbreak the phone online, but that voids all warranties and is not recommended.) There's more information available on how to update an RTOS on Android phones because Android is open source. Users or companies can provide newer versions of the baseband RTOS, and others can download and install them. Some will say that updating your baseband firmware can result in better reception, faster data throughput, and reduced battery usage. There is much Internet debate about the rewards versus the risks, though.
Two other updates of which you should be aware are product release instruction (PRI) updates and preferred roaming list (PRL) updates. The PRI contains settings for configuration items on the device that are specific to the network that it's on. The PRL is the reference guide the phone uses to connect to the proper cell phone tower when roaming. Both PRI updates and PRL updates also normally happen when the primary OS on the phone is updated. Some carriers make these two easier to update manually than the RTOS on Android phones, though. For example, Verizon users can dial *228 for a manual PRL update. As always, check with the carrier before attempting to perform these updates and to determine the exact procedure.
The last section introduced a few new acronyms to know, such as PRI and PRL. There are a few others you might see in relation to mobile phones that you should know too. The International Mobile Equipment Identity (IMEI) is a unique serial number that identifies the phone hardware itself; on most phones, you can display it by dialing *#06#. AT&T and T-Mobile were the first networks to use IMEI. The International Mobile Subscriber Identity (IMSI) identifies the subscriber account rather than the hardware and is stored on the SIM card. Within iOS, you can find many of these numbers by choosing Settings ➢ General ➢ About and scrolling to the bottom, as shown in Figure 10.12. To find the same information in Android, go to Settings ➢ About Phone ➢ Status Information (see Figure 10.13). Tap IMEI information to get the IMEI number.
FIGURE 10.12 iOS phone information
FIGURE 10.13 Android IMEI and other identifiers
Using a cellular network is great because you can connect from nearly anywhere. The downsides, though, are that the connection is slow compared to other connectivity methods, and you have to pay for the data you use. When within range of a secured Wi-Fi network, take advantage of the device's ability to use that network instead. Not only will the connection be faster, it will be free.
Before you can transfer data over a Wi-Fi network, you have to attach to the network in the same manner you would attach a laptop or any other device to a wireless network. You have to find the network by its service-set identifier (SSID), or you have to enter the SSID if it is not being broadcast. You must then satisfy any security requirements that might be in place, such as using WPA3 or having the right security keys. Exercise 10.1 steps you through the procedure on an iPhone.
A similar series of tasks is required when attaching an Android phone to the same type of network. Exercise 10.2 details that procedure.
When your phone is connected to a Wi-Fi network, you don't need to use a cellular connection for data transfers—apps will use the Wi-Fi connection for data. But if the connection gets dropped or you move out of Wi-Fi range, the device will use the cellular connection. This might be fine, but it also might not be what you want. If you want to ensure that the phone does not use cellular for data connections, you can disable that option. Exercise 10.3 walks you through the steps of how to do that on an iPhone. When the device is connected to a Wi-Fi network or when paired with a Bluetooth peer, data access will be possible; otherwise, no data-network access will occur.
Exercise 10.4 shows you how to disable cellular data on an Android-based device.
The final setting we will look at in relation to Wi-Fi networks is the virtual private network (VPN) configuration. A VPN is a secured network connection made over an unsecure network, such as the Internet. For example, if you wanted to connect your phone to your corporate network over the Internet in order to read email, but you also wanted to secure the connection, you could use a VPN. To set up a VPN on an iPhone, perform the following steps:
Select Settings ➢ General ➢ VPN. (Note that if this device has previously connected to a VPN, the VPN can be enabled under the main screen of Settings. Refer back to Figure 10.10 to see the toggle.)
You will see a screen similar to the one shown in Figure 10.25. You can see that there are four VPNs already configured on this device but that VPN is turned off.
FIGURE 10.25 VPN settings
FIGURE 10.26 Adding a VPN connection
Once you have enabled the VPN, a new VPN option will appear on your Settings page, as previously shown in Figure 10.25. This will allow you to easily enable, disable, or configure the VPN.
Exercise 10.5 shows you the steps required to set up a PPTP or L2TP VPN connection in Android.
Android also supports many apps that allow you to configure VPN connections, such as TunnelBear (www.tunnelbear.com), owned by McAfee, and Hola Free VPN (http://hola.org).
The most secure VPN standard (as of this writing) is called OpenVPN. If your network uses an OpenVPN server, know that you have to install a third-party app (such as OpenVPN Connect) to create the VPN connection. Android does not natively support OpenVPN.
The IEEE 802.15 standard specifies wireless personal area networks (WPANs) that use Bluetooth for data-link transport. The concept is that certain paired devices will be capable of exchanging or synchronizing data over a Bluetooth connection, such as between a mobile device and a desktop or laptop computer.
In other cases, the Bluetooth pairing can be used simply to control one device with another, allowing information to flow bidirectionally, even if that transfer does not result in its permanent storage on the destination. Examples of this latter functionality include a Bluetooth headset for a smartphone, a Bluetooth-attached keyboard and mouse, and pairing a smartphone or MP3 player with a vehicle's sound system.
In general, connecting a mobile device to another device requires that both devices have Bluetooth enabled. Pairing subsequently requires that at least one of the devices be discoverable and the other perform a search for Bluetooth devices. Once the device performing the search finds the other device, a sometimes-configurable pairing code must often be entered on the device that performed the search. The code must match the one configured on the device that was found in order for the pairing to occur. In some cases, this pairing will work in one direction only. Usually, it is the mobile device that should search for the other device. If both devices have the same basic capability and will be able to exchange data readily, then it's not as important which device performs the search. Regardless, the pairing code must be known for entry into the device that requests it.
The truth about pairing mobile devices with conventional computers is that the results are hit or miss. There's never any guarantee that a given pairing will be successful to the point of data transfer capability. Both devices must agree on the same Bluetooth specification. This turns out to be the easy part because devices negotiate during the connection. The part that is out of your control is what software services the manufacturer decided to include in their devices. If one device is not capable of file transfers over Bluetooth, then the pairing may go off without a hitch, but the communication process will stop there.
It sometimes takes a few tries to get the pairing or file transfer to work, so always be willing to try a few times. In the worst-case scenario, if it's still not working, look for documentation online to help. Exercise 10.6 shows the steps to connect an Android device to a Windows 10 laptop over Bluetooth and then to transfer a file back and forth between the two. This exercise is split into three sections so that you can concentrate on individual stages of the pairing and file sharing processes.
Exercise 10.7 steps through the process of pairing an iPhone with a vehicle in order to stream music to the vehicle's sound system. Note that the procedures shown in these exercises are based on the specific non-mobile devices used—a Windows 10–based HP laptop and a 2019 Honda. The procedure is roughly the same with other remote devices but will likely vary in the fine details.
If you plan on doing a lot of file transfers between a mobile device and a laptop or other paired device, it might make sense to get an app to make that job easier. The devices still need to be paired, but the app makes the transfer process easier. Bluetooth File Transfer is one example, and it's available in the Google Play Store. The iOS App Store has several options available as well if you search for Bluetooth file transfer.
The procedure in Exercise 10.7 is performed from the perspective of an iPhone pairing with a 2019 Honda vehicle. The exact process for Bluetooth pairing will differ based on your mobile OS and the device to which you are connecting. In general, though, remember that these are the steps:
Future connections to the iPhone from this vehicle should be automatic when the vehicle's Bluetooth mode is selected, and the iPhone should begin playing from the point where it last stopped playing over any output source. The specific initial and subsequent interactions between the vehicle and iPhone may vary from this description.
Mobile devices give users the ability to roam practically anywhere they want to and still be connected to the world. Whether this is a good or bad thing can be up for debate, but here we'll focus on the positive aspects of this freedom. One of the compelling features of mobile devices is to help you pinpoint where you are and help you get from where you are to where you want to be. This is accomplished through location services, which we will cover in this section.
Another positive of mobile devices is of course their size. Small devices are far more portable than bulky desktops or even laptops—you're unlikely to fit a laptop into your pocket. While mobile devices aren't great for editing spreadsheets or documents, they are more than adequate for sending and receiving email, managing calendars, and storing business and personal contacts. And for larger data storage needs, mobile devices easily connect to the cloud or desktop/laptop computers for synchronization. We'll cover all of these topics in this section on mobile app support as well.
Location services identify where you are and can help give you a route to where you want to be. Two different technologies combine to form what we know as location services, and they are GPS and cellular location services.
Global Positioning System (GPS) is a satellite-based navigation system that provides location and time services. It's great technology for those who are perpetually lost, want to know the best way to get somewhere, or want or need to track down someone else.
The most common commercial use for GPS is navigation; you can get your current location and directions to where you want to go. Other uses include tracking; law enforcement can monitor inmates with location devices, or parents can locate their children via their smartphones. Oil and gas companies use GPS in their geological surveys, and farmers can use GPS-enabled machines to plant crops automatically. There are three major components to GPS: the satellite constellation, the ground control network, and the receiver. The ground control network monitors satellite health and signal integrity. We'll look at the other two components next.
The U.S. Department of Defense (DoD) started developing GPS in the early 1970s, with the goal of creating the best navigation system possible. The first GPS satellite launched in 1978, and today the U.S. government manages 32 total GPS satellites covering the globe. Twenty-four are active satellites for the service, and the rest are backups. Satellites are launched into an orbit of about 12,550 miles above the earth, and old satellites are replaced with new ones when an old one reaches its life expectancy or fails. GPS is free to use for commercial purposes.
There are additional global satellite-based navigation systems managed by other government entities. Collectively, they are called Global Navigation Satellite Systems (GNSSs). All of the systems are outlined in Table 10.1; as you might expect, no two systems are compatible with each other.
Name | Managed by | Number of satellites |
---|---|---|
Global Positioning System (GPS) | United States | 24 |
Global Navigation Satellite System (GLONASS) | Russia | 24 |
Galileo Positioning System | European Space Agency | 30 |
BeiDou Navigation Satellite System (BDS) | China | 35 |
Indian Regional Navigation Satellite System (IRNSS) | India | 7 |
TABLE 10.1 Global Navigation Satellite Systems
At first glance, it might seem like there are an excessive number of satellites required to run a navigation service. GPS systems were designed to require multiple satellites. Receivers use a process called triangulation to calculate the distance between themselves and the satellites (based on the time it takes to receive a signal) to determine their location. They require input from four satellites to provide location and elevation or from three satellites to provide location. Most GNSSs provide two levels of service, one more precise than the other. For example, GPS provides the following two levels:
The two service levels are separated by transmitting on different frequencies, named L1 and L2. L1 transmits at 1,575.42 MHz, and it contains unencrypted civilian C/A code as well as military P code. L2 (1,227.60 MHz) only transmits encrypted P code, referred to as Y code. In the United States, SPS is free to use; the receiver just needs to manage C/A code. PPS requires special permission from the U.S. DoD as well as special equipment that can receive P and Y code and decrypt Y code. Galileo, in the European Union, provides free open (standard) service, but charges users a fee for the high data throughput commercial (premium) service. Both offer encrypted signals with controlled access for government use.
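To make the triangulation math concrete, here is a minimal Python sketch (not code from any real receiver) that recovers a receiver's position from assumed satellite coordinates and signal travel times. The satellite and receiver positions are made-up illustrative values, and the sketch ignores the receiver clock error that a real GPS solution must also estimate, which is one reason a fourth satellite is needed in practice.

```python
# Minimal, self-contained sketch of the trilateration math GPS receivers use.
# Satellite and receiver positions below are made-up illustrative values,
# not real orbital data; a real receiver also solves for its clock error.
import numpy as np

C = 299_792_458  # speed of light in m/s

# Hypothetical satellite positions (x, y, z) in meters
sats = np.array([
    [15_600_000.0,  7_540_000.0, 20_140_000.0],
    [18_760_000.0,  2_750_000.0, 18_610_000.0],
    [17_610_000.0, 14_630_000.0, 13_480_000.0],
    [19_170_000.0,    610_000.0, 18_390_000.0],
])

true_receiver = np.array([1_113_000.0, 6_200_000.0, 1_000_000.0])

# Signal travel time from each satellite -> distance (distance = c * time)
travel_times = np.linalg.norm(sats - true_receiver, axis=1) / C
distances = C * travel_times

# Linearize by subtracting the first range equation from the others,
# then solve the resulting system with least squares.
s0, d0 = sats[0], distances[0]
A = 2 * (s0 - sats[1:])
b = (distances[1:] ** 2 - d0 ** 2
     - np.sum(sats[1:] ** 2, axis=1) + np.sum(s0 ** 2))
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)

print("Estimated position (m):", estimate.round(1))
print("True position (m):     ", true_receiver)
```

Because the travel times in this toy example are generated from the true position, the estimated and true positions printed at the end should match.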
GPS receivers come in all shapes and sizes. Common forms are wearable watches and wristbands, stand-alone GPS devices (like the Garmin device shown in Figure 10.43), and ones built into automobiles. Most smartphones and tablets support GPS as well (Apple products use the name Location Services), and more and more laptops are coming with built-in GPS capabilities. You can also find GPS devices that come on a collar for pets. Most stand-alone GPS devices feature capacitive touch screens. The Garmin device shown in Figure 10.43 has a 4.5" touch screen; 5" to 7" devices are common as of this writing. It also contains an SD memory card slot for expansion. Popular brands of automobile GPS devices are Garmin, TomTom, and Magellan.
FIGURE 10.43 Garmin Nuvi GPS
Cellular location services is designed to do the same thing GPS does, such as provide a user's location or help navigate a route to a destination. While it uses triangulation just like GPS, the rest of the mechanics are different.
First, while commercial GPS services are free, cellular location services is not. It's provided via subscription from a mobile carrier such as Verizon, T-Mobile, AT&T, and others. Second, instead of using satellites, it uses cell phone towers for its triangulation points. This means that if a user doesn't have cell phone reception, then cellular location services won't work. Cellular location services is also less precise than GPS. Recall that GPS is accurate within 100 meters (although it's really often within 10 meters, as we discussed earlier), but cellular location services is only accurate within about 1,000 meters. If a user's phone is within range of multiple cell towers, that precision can increase substantially and get close to GPS performance. However, the rule of thumb is that GPS is more accurate than cellular when locating devices.
Setting precision aside, which technology do mobile devices use for location services? The answer is a combination of GPS and cellular, provided of course the device can find the satellites or cell towers. In fact, mobile devices can use Wi-Fi signals for location purposes as well, so it's really a combination of all three, depending on the circumstance.
Knowing how to turn location services on or off on a GPS receiver or a mobile device is a valuable skill. The specific settings depend on the operating system, of course, but we can provide general guidance here. Exercise 10.8 shows you how to configure Location Services in iOS 12.
In the Android OS, GPS is configured through Settings as well:
Move the slider to the on position, as shown, to enable GPS.
You can also configure things such as individual app permissions, and the ability to use Bluetooth and Wi-Fi to improve accuracy (by tapping Improve Accuracy), and location services for emergency purposes.
FIGURE 10.45 Android GPS settings
FIGURE 10.46 Location services app permissions
The use of mobile devices on corporate networks has increased exponentially over the past several years, and it will likely continue to increase over time. So many people work remotely or travel that it just makes sense for them to be able to manage email or perform tasks such as order placement and management from their mobile devices. Of course, these usage cases can give network administrators nightmares, as every additional device connected to the network represents another security risk. And when devices are small and easily misplaced or stolen, that elevates the security risk to a whole new level.
To help with mobile device security, many companies use a combination of mobile device management and mobile application management, and they can also implement two-factor authentication. We'll look at those in the upcoming sections and then take a deep dive into the most common corporate use of mobile devices, which is sending and receiving email.
Imagine that you are a network administrator for a corporate network, and the company implements a new policy where mobile devices should be granted network access. As we mentioned earlier, if done incorrectly this can pose a massive security risk to the company, so, no pressure, right? With security in mind, you may want to explore implementing a mobile device management (MDM) solution.
An MDM is a software package residing on a server. The key purpose of an MDM is to enroll mobile devices on the corporate network, and once those devices are enrolled, to manage security. This is done through security policies as well as the ability to remotely track, lock, unlock, encrypt, and wipe mobile devices as needed. Now if someone's smartphone is misplaced or stolen, an administrator can wipe it remotely and the security threat is mitigated.
Although this is a good solution for device-level security, there's a big piece missing—the software. That's where mobile application management (MAM) comes into play. Typically implemented in conjunction with an MDM, an MAM allows network administrators to remotely install, delete, encrypt, and wipe corporate applications and related data from mobile devices. In an MAM, administrators can specify software packages that are allowed to be installed on the mobile device and prohibit others that could pose security risks. When enrolled in the MAM, users may have a corporate app store that functions similarly to Apple's App Store or Google Play. Figure 10.47 shows an example of some apps in a corporate app store, managed by VMware AirWatch.
FIGURE 10.47 Corporate app store
We've talked about authentication several times in this book so far, including multifactor authentication in Chapter 8, “Network Services, Virtualization, and Cloud Computing.” Recall that single-factor authentication means a user needs just one piece of information beyond their username, typically a password. Multifactor means they need more than one, such as a password and an additional credential.
Two-factor authentication helps increase security for mobile devices by requiring that additional piece of information. A common implementation is to require a PIN from a security token, which changes every 30 seconds. We showed an example of one back in Figure 8.5. Another way to implement a security token is through a software package such as PingID (as shown in Figure 10.48). Here's a brief overview of how it works:
The second factor could also be something such as a one-time password generated by a security server, biometrics, or detection of location of a specific IP address.
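Security tokens that display a new PIN every 30 seconds typically implement a time-based one-time password (TOTP) scheme. The following Python sketch illustrates the general idea under the assumption of an RFC 6238-style token; it is not PingID's actual implementation, and the shared secret shown is a made-up example.

```python
# Minimal sketch of a rotating-PIN second factor (time-based one-time password).
# Illustrates the general idea behind tokens that change every 30 seconds;
# the secret below is a made-up example, not a real credential.
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Return the current one-time PIN for a base32-encoded shared secret."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // time_step          # number of 30-second steps
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical shared secret
```

Because the authentication server and the token both know the shared secret and the current time, each can compute the same PIN independently and compare the results.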
Accessing email is the most common corporate use for mobile devices. Usually, the most difficult part is finding the server settings that are used only during establishment of the connection, which tends to occur only once for each device. The other big challenge is when users have the same devices and accounts for many years; since they have not configured them in some time, it might be difficult for them to remember their usernames and passwords, if needed.
When configuring mobile devices to access email, you will be attaching to one of the following two types of services:
Generally speaking, connecting to an integrated commercial provider is quite easy and nearly automatic. Connecting to a corporate or ISP-based account usually involves a few more steps, but it shouldn't be too tricky if you know the proper server settings. In the following sections, we'll look at configuring email on a mobile device and settings to know for manual email configuration.
If your email is on a common web-based service, such as iCloud, Google/Inbox (Gmail), Exchange Online, or Yahoo, configuring the email feature is pretty easy. Usually, your email address and password are all that are required. However, if you have a corporate or ISP account or a custom domain, even if it's hosted and accessible through Gmail, Outlook.com, or the other popular services, you may need to take a few more steps to make a connection. For the purposes of trying out a commercial service, you can always make a dummy account on the service's website and play around with that if it helps you complete these exercises. Exercise 10.9 and Exercise 10.10 detail the basic steps required to configure a commercial email account on an iPhone and on an Android standard email client, respectively.
Exercise 10.10 details the steps required for configuring an email account on an Android standard email client. If the Android device does not have the email app on the home screen, you can add it or run it directly from the All Apps list.
In situations when you find that your email client cannot automatically configure your email account for you, there are often manual settings for the protocols required for sending and receiving emails. Table 10.2 details these protocols and their uses. These should look familiar to you if you recall Chapter 6.
Mail protocol | Description | Default port number |
---|---|---|
Simple Mail Transfer Protocol (SMTP) | Used to communicate between client and server and between servers to send mail to a recipient's account. The key word is send, as this is a push protocol. | TCP 25 |
Post Office Protocol (POP) | Used to communicate between a client and the client's mail server to retrieve mail with little interaction. | TCP 110 |
Internet Message Access Protocol (IMAP) | Used to communicate between a client and the client's mail server to retrieve mail with extensive interaction. | TCP 143 |
TABLE 10.2 TCP/IP mail protocols
In a TCP/IP network using only the protocols in Table 10.2 (there are other, less common options), you must always use SMTP for sending mail. You must decide between the use of POP and IMAP for interacting with the mail server to retrieve your mail with the client. When supported, IMAP is a clear choice because of its extensive interaction with the server, allowing the client to change the state or location of a mail item on the server without the need to download and delete it from the server.
Conversely, POP limits client interaction with the server to downloading and deleting items from the server, not allowing their state to be changed by the client. In fact, the use of POP as your receive-mail protocol can lead to confusion because copies of the same items can appear in multiple client locations, some marked as read and others unread. Additionally, where IMAP changes the state of a mail item on the server and leaves the item there for later access by the same or a different client, POP must be configured not to delete the item from the server when it is downloaded to each client. That setting, however, is what forces the choice between multiple copies of the same item spread across the clients or only one client being able to download the item.
Most, if not all, Internet mail services require secure connections. SMTP, POP, and IMAP are all unsecure protocols, so this poses a problem. One solution is to use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) on top of these protocols. You might recognize SSL and TLS from their use on TCP port 443 for the HTTPS protocol. When using these protocols to secure the email protocols, mail servers and clients need to communicate on ports other than the ones shown in Table 10.2. These are outlined in Table 10.3.
Mail protocol | TCP port number |
---|---|
SMTP with SSL | 465 |
SMTP with TLS | 587 |
IMAP with SSL/TLS | 993 |
POP with SSL/TLS | 995 |
TABLE 10.3 Secure mail ports
Additionally, you will need to know the server names for your service. Sometimes they are the same for inbound and outbound mail handling, but they may be different. Table 10.4 lists the servers in the United States for iCloud, Gmail, Exchange Online, and Yahoo Mail. Unless otherwise specified, the ports in Table 10.3 should be used for the protocols listed in Table 10.4.
Service | Direction and protocol | Server name |
---|---|---|
iCloud | Outbound on SMTP with SSL | smtp.mail.me.com |
iCloud | Inbound on IMAP with SSL | imap.mail.me.com |
Google/Inbox (Gmail) | Outbound on SMTP with SSL or TLS | smtp.gmail.com |
Google/Inbox (Gmail) | Inbound on IMAP with SSL | imap.gmail.com |
Google/Inbox (Gmail) | Inbound on POP with SSL | pop.gmail.com |
Exchange Online | Outbound on SMTP with TLS | smtp.office365.com |
Exchange Online | Inbound on IMAP with SSL | outlook.office365.com |
Exchange Online | Inbound on POP with TLS | outlook.office365.com |
Yahoo Mail | Outbound on SMTP with SSL | smtp.mail.yahoo.com |
Yahoo Mail | Inbound on IMAP with SSL | imap.mail.yahoo.com |
Yahoo Mail | Inbound on POP with SSL | pop.mail.yahoo.com |
TABLE 10.4 Secure mail servers for common email services
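If you ever need to script a quick connectivity check instead of using a GUI client, the ports in Table 10.3 and the server names in Table 10.4 are all you need. The following Python sketch uses the standard library's imaplib and smtplib against the Gmail servers as an example; the address and password are placeholders, and most providers now require an app password or OAuth token rather than the regular account password.

```python
# Hedged sketch of manual mail settings: ports from Table 10.3, Gmail servers
# from Table 10.4. USER and PASSWORD are placeholders, not real credentials.
import imaplib
import smtplib
from email.message import EmailMessage

USER = "user@example.com"       # placeholder account
PASSWORD = "app-password-here"  # placeholder credential

# Receive: IMAP over SSL on TCP 993. Unlike POP, IMAP can change the state of
# a message on the server (for example, marking it as read) without deleting it.
with imaplib.IMAP4_SSL("imap.gmail.com", 993) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    status, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        imap.store(num, "+FLAGS", "\\Seen")   # state change stays on the server

# Send: SMTP with TLS on TCP 587 (STARTTLS). SMTP with SSL would use port 465.
msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = USER, USER, "Port test"
msg.set_content("Sent over SMTP with TLS on port 587.")
with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
    smtp.starttls()
    smtp.login(USER, PASSWORD)
    smtp.send_message(msg)
```

The IMAP half also illustrates the difference discussed earlier: the flag change made with STORE lives on the server, which is something POP cannot do.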
While some users are content to have a mobile device only, many others do not consider their mobile devices to be islands unto themselves. Instead, they treat their mobile devices as extensions of their primary computing devices that, even if they happen to be portable, stay at work or at home while the mobile devices go on the road with the users. However, because many of the same changes to a user's calendar, contacts, and personal files can be made from the mobile device as easily as from the primary computer, frequent synchronization of the two devices is in order. Synchronization is the act of mirroring all unique changes and additions from one device to the other.
In most cases, there are multiple options as to how the mobile device will connect to the computer system it is syncing to. Some connections allow synchronization; others do not. Common connections include over USB, across Wi-Fi, over Bluetooth, and through a cellular connection. Although the wired connections tend to be the most reliable, the convenience of wireless connections and their automatic unattended synchronizations cannot be ignored. When syncing, you can either sync to a local computer (such as a laptop, desktop, or networked server) or to the cloud. Keep in mind that when you sync a mobile device to the cloud or are using certain synchronization utilities, you may run into data caps that only allow you to sync a certain amount of data.
Because each manufacturer of mobile devices must approach synchronization of data in the best manner for their devices, generalized discussions of data to be synchronized can only include the common types. Here's a list of the most common types of data to be synchronized by all such utilities:
In the following sections, we will look at how to synchronize using Microsoft utilities as well as how to sync iOS and Android devices.
Microsoft provides several software utilities that enable synchronization between devices. Two that we will focus on here, because they are included as exam objectives, are Microsoft 365 and ActiveSync.
Microsoft 365 is a subscription service that provides access to the Office suite of apps from Microsoft, including Word, Excel, PowerPoint, and others. With the subscription, users also get storage space in Microsoft's cloud. That cloud storage space can also be used to sync devices with one another.
Imagine a scenario where a user has a desktop and laptop computer, and they want to ensure the operating environment is always identical between the two. Windows 10 users with a Microsoft 365 account can easily sync settings between multiple computers with their subscription. Simply open Start ➢ Settings ➢ Accounts ➢ Sync Your Settings (Figure 10.57), and slide Sync Settings to On. The key things to note about using this are:
FIGURE 10.57 Sync Your Settings
If synchronization is no longer desired, the user can remove the settings from the cloud by taking the following steps:
http://account.microsoft.com/devices
Syncing files and data between Windows-based computers is a bit more involved. It requires a Microsoft SharePoint server in addition to the Microsoft 365 subscription. Once the server and client are configured, synced files will be accessible through File Explorer. Typically, a user's Documents folder is set up to be synced, and whenever a file is modified, the updated version is saved to both systems. Various third-party synchronization software packages are also available that provide similar functionality.
ActiveSync is a protocol used by Microsoft Exchange Server that allows users to access email, calendar, contacts, and tasks from a mobile device such as a smartphone or a tablet. From the server side, ActiveSync also allows administrators to remotely wipe, enforce password policies, and enable encryption on mobile devices. To set up ActiveSync on a mobile device, the user needs to have a Microsoft Exchange account. Exercise 10.11 shows you how to enable ActiveSync on an iPhone.
Android devices can use ActiveSync as well. Go to Settings ➢ Accounts And Backup ➢ Manage Accounts and tap Add Account. On the next screen (Figure 10.62), choose Microsoft Exchange ActiveSync. On the following screen (Figure 10.63), enter the email address and password, and tap Sign In (or choose Manual Setup to set configuration options such as the type of security to use). Once connection is made to the server, sync options such as email, contacts, and calendar can be configured.
Due to their size, Apple iOS devices have limited storage space, and they also have the potential to get lost (or stolen) somewhat easily. Therefore, it's smart to synchronize your device to a desktop or laptop computer (they will be collectively referred to as desktop in the rest of this section) and/or make backups of the device. The differences between the two concepts aren't that large. Synchronization means that the exact same copy of the data (music, pictures, contacts, or whatever) is on both the iOS device and the desktop. Backing up means taking whatever is on the phone at that time and ensuring that a duplicate is stored elsewhere. Synchronization can often happen both ways, whereas backups are a one-way process. Apple provides two options for syncing and backing up: sync to a desktop (using iTunes) and back up to the cloud (using iCloud).
FIGURE 10.62 Adding ActiveSync in Android
To sync a device with a desktop, you must have the iTunes app installed on your computer. It's installed by default on Macs and can be found at https://www.apple.com/itunes for non-Apple OSs. Figure 10.64 shows the Summary page of iTunes for an iOS device when it's connected. Notice that the Backups section has options to back up to iCloud or to This Computer. In this section, we will focus on local backups.
By default, iOS devices will automatically sync each time they are connected by USB (and over Wi-Fi in some cases) and are recognized under the Devices section in the left frame of iTunes. The exception is when iTunes is set to prevent automatic synchronization. Figure 10.65 shows the dialog box in iTunes reached by clicking Edit ➢ Preferences ➢ Devices. (You might need to press the Alt key to get the Edit menu to appear.) Notice that syncing is set to occur automatically because the Prevent iPods, iPhones, And iPads From Syncing Automatically check box is cleared.
FIGURE 10.63 Enter email address and password
When synchronizing with a desktop, both the iOS device and the desktop authenticate each other. This two-way authentication, called mutual authentication, lets multiple services on the iOS device communicate with the appropriate services on the desktop.
The selection of what is to be synchronized is a task unto itself, but iTunes provides specific tabs on the left side of the interface for each class of data, as shown back in Figure 10.64, under the Settings section.
FIGURE 10.64 iTunes Summary page
FIGURE 10.65 Devices Preferences in iTunes
You can make very granular choices about what you want to sync. The following list gives the basic characteristics of each tab:
Below the Settings section is a section called On My Device that allows you to view what's currently stored on the device.
If the iOS device is running iOS version 5 or higher and the computer it syncs with is running iTunes version 10.5 or higher, you can sync your iOS device by using Wi-Fi. Besides these minimum version requirements, a few things have to come together before this will work. The following list outlines these requirements:
FIGURE 10.66 Enabling sync over Wi-Fi
You can tell when the device is syncing because the eject arrow to the right of the device name changes to the rotating sync icon, like the one in Figure 10.67. (Compare it to the eject icon to the right of the phone name back in Figure 10.64.) In Figure 10.67, you can also tell that this device is connected to the computer because the battery indicator is displayed. If it were syncing via Wi-Fi, the battery indicator would not be shown.
FIGURE 10.67 Device is syncing
If automatic synchronization is disabled in Devices Preferences, you can start the manual synchronization of an iOS device by selecting the iOS device above the left frame in iTunes and then clicking the Sync button at the bottom-right corner of its Summary tab.
So far, we've talked about syncing with the desktop, but we haven't mentioned storing data on the cloud. Apple's version of the cloud is called iCloud, and it is available to all iOS users. This is the only real option for users who have ditched their desktops, and a convenient option for those who haven't.
When a user creates an Apple ID, it's used to log into the iTunes store, but it can also be used for an iCloud account. Apple recommends that the same username be used for both, but it is not required. On the iOS device, it's easy to get to iCloud settings. Open the Settings app, tap the Apple ID at the top of the page, and then choose iCloud. The configuration page is shown in Figure 10.68.
FIGURE 10.68 iCloud configuration settings
At the top of the iCloud settings screen, you will see the space available for that Apple ID and which types of data you are syncing or backing up. Simply slide the switch from off to on to turn on synchronization or backups. The default amount of free storage space is 5 GB. Tapping Manage Storage lets you manage backups and, if needed, purchase more space in tiers such as 50 GB, 200 GB, or 2 TB. Synchronization and backups will happen when the phone is plugged into a power source, locked, and connected to Wi-Fi.
Just as with Apple's devices, mobile devices built for the Android operating system can be synced to a traditional computer. Apple's iTunes is proprietary and has been designated as the application that performs synchronization of iOS devices. In a similar way, manufacturers of Android devices have their own syncing utilities. Because this software and the connection methods allowed vary widely from one manufacturer to another, it is difficult to predict exactly what one manufacturer will offer in its utility and whether each Android device it produces will interact the same way and over the same connections.
Let's use a Samsung phone as an example. If you want to configure backups using Google Drive, tap Settings ➢ Accounts And Backup ➢ Back Up Data (it's under Google Drive). You will see a screen like the one shown in Figure 10.69. In this instance, backup is enabled and the device is automatically backed up. To run a manual backup, tap the Back Up Now button.
FIGURE 10.69 Google Drive backup
Common items available for synchronization include contacts, applications, email, pictures, music, videos, calendars, bookmarks, documents, location data, social media data, e-books, and passwords. Of course, specific options will depend on the software used.
This chapter introduced you to key features of mobile devices: network connectivity and synchronization. Establishing network connectivity means enabling cellular, Wi-Fi, or Bluetooth connections and configuring them properly.
Key cellular concepts to understand are hotspots and tethering, PRI/PRL/baseband, radio firmware, and IMEI/IMSI. When using Bluetooth, you need to pair the device with another Bluetooth device for connectivity.
Next, we covered mobile app support. A popular feature of mobile devices is location services, which can help locate you or your phone and give directions. We then moved into mobile device management and application management, including email configuration, two-factor authentication, and corporate applications.
Finally, we looked at synchronization. Synchronizing a mobile device to a desktop/laptop or the cloud is a good way to ensure that data is saved to a secure or permanent location. Types of data that are synchronized include contacts, applications, email, pictures, music, videos, calendars, bookmarks, documents, location data, social media data, e-books, and passwords. Common connection types for synchronization include Wi-Fi, USB, and cellular.
Understand the differences between wireless specifications. Know basic differences between 2G, 3G, 4G, and 5G, including what GSM and CDMA are and why they weren't compatible.
Know how to enable or disable wireless/cellular data connections. Wireless connections are individually disabled through the Settings app on most phones or through a quick access screen. Airplane mode disables all wireless signals on Android and all but Bluetooth on iOS.
Understand what PRL is and how to update it. The preferred roaming list (PRL) is the reference guide the phone uses to connect to the proper cell phone tower when roaming. It's updated when you update a mobile OS. Depending on the OS and carrier, it may be updated manually.
Understand the steps needed to configure Bluetooth. You need to enable Bluetooth, enable pairing, find a device for pairing, enter the appropriate PIN code (or confirm the PIN), and test connectivity.
Know the differences between GPS and cellular location services. GPS is a free service provided by the government and uses satellites. Cellular location services use cell phone towers and require an account with a carrier.
Know the purposes of MDM and MAM. Mobile device management (MDM) is primarily used to determine which mobile devices are allowed on a network and to set policies for access. MDM also provides mechanisms for remotely locking and wiping devices. Mobile application management (MAM) is for managing corporate applications on mobile devices.
Know which protocols are used for email and which ports they use. POP3 (port 110) and IMAP (port 143) are used to receive email, and SMTP (port 25) is used to send email. None of these protocols are inherently secure. They can be secured with SSL or TLS. SMTP over SSL uses port 465, and SMTP over TLS uses port 587. IMAP over SSL/TLS uses port 993, and POP over SSL/TLS uses port 995. (A short, illustrative sketch showing these secure ports in use appears after the final point below.)
Know what two-factor authentication is. Two-factor authentication requires an additional piece of information beyond the username and password for access to be granted. Often this is a PIN generated by a security token, but it can also be a one-time password or biometrics.
Be familiar with four commercial email providers and required configuration items. Common commercial email providers are iCloud, Google/Inbox, Exchange Online, and Yahoo Mail. Each provider has its own inbound and outbound servers, but most of the time that configuration information is automatically provided when you try to connect to them with an email client.
Know which types of data are often synchronized. Common data types for synchronization include contacts, applications, email, pictures, music, videos, calendars, bookmarks, documents, location data, social media data, e-books, and passwords.
Understand the differences between two Microsoft synchronization utilities. Microsoft 365 can sync Windows settings between two Windows devices. It can also sync files, but a SharePoint server is also required. ActiveSync is used by Exchange Server to sync email, contacts, calendars, and notes between a mobile device and an Exchange email server.
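Referring back to the email protocol ports listed above, here is a minimal sketch of our own (not taken from the exam objectives) that uses Python's standard library to connect over the secure ports. The host names, account, and password are hypothetical placeholders; real providers may also require app-specific passwords or modern authentication.

# Minimal sketch of the secure email ports in practice (placeholders throughout).
import imaplib
import smtplib
from email.message import EmailMessage

IMAP_HOST = "imap.example.com"    # hypothetical inbound (receive) server
SMTP_HOST = "smtp.example.com"    # hypothetical outbound (send) server
USER = "user@example.com"
PASSWORD = "app-password"

# Receiving email: IMAP over SSL/TLS uses port 993 (POP over SSL/TLS would use 995).
with imaplib.IMAP4_SSL(IMAP_HOST, 993) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    status, data = imap.search(None, "UNSEEN")    # IDs of unread messages
    print("Unread message IDs:", data)

# Sending email: SMTP submission with STARTTLS typically uses port 587
# (SMTP over SSL would use port 465 with smtplib.SMTP_SSL instead).
msg = EmailMessage()
msg["From"] = USER
msg["To"] = USER
msg["Subject"] = "Port test"
msg.set_content("Testing secure SMTP submission.")

with smtplib.SMTP(SMTP_HOST, 587) as smtp:
    smtp.starttls()                # upgrade the plain connection to TLS
    smtp.login(USER, PASSWORD)
    smtp.send_message(msg)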
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance-based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
Explain how to establish Wi-Fi connectivity on an Apple iPhone.
THE FOLLOWING COMPTIA A+ 220-1101 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
Mentioning the words troubleshooting theory to many technicians can cause their eyes to roll back in their heads. It doesn't sound glamorous or sexy, and a lot of techs believe that the only way to solve a problem is just to dive right in and start working on it. Theories are for academics. In a way, they're right—you do need to dive in to solve problems because they don't just solve themselves. But to be successful at troubleshooting, you must take a systematic approach.
You may hear people say, “Troubleshooting is as much of an art as it is a science,” and our personal favorite, “You just need to get more experience to be good at it.” While there is an art to fixing problems, you can't ignore science. And if you need experience to be any good, why are some less experienced folks incredibly good at solving problems while their more seasoned counterparts seem to take forever to fix anything? More experience is good, but it's not a prerequisite to being a good troubleshooter. Again, it's all about applying a systematic approach.
There's one more detail to understand before getting into the details of specific problems: in order to troubleshoot anything, you need to have a base level of knowledge. For example, if you've never opened the hood of a car, it will be a bit challenging for you to figure out why your car won't start in the morning. If you're not a medical professional, you might not know why that body part hurts or how to make it feel better. In the same vein, if you don't know how data is stored and accessed on a computer, it's unlikely that you'll be able to fix related computer problems. So before you get too heavy into troubleshooting, make sure you understand how the systems on which you are working are supposed to function in the first place.
Because this chapter comes after the hardware and networking chapters, we're going to assume that you've read them already. Therefore, we're not going to get into a lot of detail about how things work—it's assumed that you know those details by now. (If you're still not certain, this book is a great reference manual!) Instead, we'll talk more about what happens when things don't work the way they're supposed to: what signs to look for and what to do to fix the problem.
After discussing the theory, this chapter covers troubleshooting core hardware components, including motherboards, RAM, CPUs, and power. The remainder of the Hardware and Network Troubleshooting objective is covered in Chapter 12, “Hardware and Network Troubleshooting.”
When troubleshooting, you should assess every problem systematically and try to isolate the root cause. Yes, there is a lot of art to troubleshooting, and experience plays a part too. But regardless of how “artful” or experienced you are, haphazard troubleshooting is doomed to fail. Conversely, even technicians with limited experience can be effective troubleshooters if they stick to the principles. The major key is to start with the issue and whittle away at it until you can narrow it down and pinpoint the problem. This often means eliminating, or verifying, the obvious.
Although everyone approaches troubleshooting from a different perspective, a few things should remain constant:
In the next few sections, we'll take you through each step of the troubleshooting process.
There's a famous quote attributed to Albert Einstein: “If I had an hour to solve a problem, I'd spend 55 minutes on the problem and 5 minutes on the solution.” Whether he actually said it is debatable, but the premise behind the quote is spot on. It may seem obvious, but it can't be overlooked: if you can't define the problem, you can't begin to solve it. This is true not only with computers but with every facet of life!
Sometimes, problems are relatively straightforward, but other times they're just a symptom of a bigger issue. For example, if a user isn't able to connect to the Internet from their computer, it could indeed be an issue with their system. But if other users are having similar problems, then the first user's difficulties might just be one example of the real problem.
Problems in computer systems generally occur in one (or more) of four areas, each of which is in turn made up of many pieces:
Many times, you can define the problem by asking questions of the user. One of the keys to working with your users or customers is to ensure, much like a medical professional, that you have a good bedside manner. Most people are not as technically savvy as you, and when something goes wrong, they become confused or even fearful that they'll take the blame. Assure them that you're just trying to fix the problem but that they can probably help because they know what went on before you got there. It's important to build trust with your customer—believe what they are saying, but also believe that they might not tell you everything right away. It's not that they're necessarily holding back information; they just might not know what's important to tell.
Help clarify things by having the customer show you what the problem is. The best method we've seen of doing this is to say, “Show me what ‘not working' looks like.” That way, you see the conditions and methods under which the problem occurs. The problem may be a simple matter of an improper method. The user may be performing an operation incorrectly or performing the operation in the wrong order. During this step, you have the opportunity to observe how the problem occurs, so pay attention.
Here are a few questions to ask the user to aid in determining the problem:
Be careful of how you ask questions so that you don't appear accusatory. You can't assume that the user did something to mess up the computer. Then again, you also can't assume that they don't know anything about why it's not working.
Although it's sometimes frustrating dealing with end users and computer problems, such as the user who calls you up and gives you the “My computer's not working” line (okay, and what exactly is that supposed to mean?), even more frustrating is when no one was around to see what happened. In cases like this, do your best to find out where the problem is by establishing what works and what does not.
Let's say that you get to a computer and the power light is on and you can hear the power supply fan whirring but there is no video and the system seems to be unresponsive. At least you know that the system has power, and you can start investigating where things start to break down. (We sense a reboot in your future!)
The whole key to this step is to identify, as specifically as possible, what the problem is. The more specific you can be in identifying what's not working, the easier it will be for you to understand why it's not working and how to fix it. If you have users available who were there when the computer stopped working, you can try to gather information from them. If not, you're on your own to gather clues. It's like CSI but not as gory.
So now instead of having users to question, you need to use your own investigative services to determine what's wrong. The questions you would have otherwise asked the user are still a good starting point. Does anything appear amiss or seem to have been changed recently? What is working and what is not? Was there a storm recently? Can I reboot? If I reboot, does the problem seem to go away? Is there any information in system or application logs that provides clues?
The key is to find out everything that you can that might be related to the problem. Document exactly what works and what doesn't and, if you can, why. If the power is out in the house, as in the story related earlier, then there's no sense in trying the power cord in another outlet.
This is important because it determines the part of the computer on which you should focus your troubleshooting skills. Each part requires different skills and different tools.
To determine whether a problem is hardware- or software-related, you can do a few things to narrow down the issue. For instance, does the problem manifest itself when the user uses a particular piece of hardware (an external optical drive or a USB hard drive, for example)? If it does, the problem is more than likely hardware-related.
Determining if the issue is hardware- or software-related relies on personal experience more than any of the other troubleshooting steps. Without a doubt, you'll run into strange software problems. Each one has a particular solution. Some may even require reinstallation of an application or the operating system. If that doesn't work, you may need to resort to restoring the entire system (operating system, applications, and data) from a data backup done when the computer was working properly.
Hardware problems are usually pretty easy to figure out. Let's say that the sound card doesn't work. You've tried new speakers that you know do work, and you've reinstalled the driver. All the settings look right, but the sound card just won't respond. The sound card is probably the piece of hardware that needs to be replaced.
With many newer computers, several components such as sound, video, and networking cards are integrated into the motherboard. If you troubleshoot the computer and find a hardware component to be bad, there's a good chance that the bad component is integrated into the motherboard and the whole motherboard must be replaced—an expensive proposition, to be sure.
In your middle school or junior high school years, you probably learned about the scientific method. In a nutshell, scientists develop a hypothesis, test it, and then figure out if their hypothesis is still valid. Troubleshooting involves much the same process.
Once you have determined what the problem is, you need to develop a theory as to why it is happening. First question the obvious. Forgetting to check the obvious things can result in a long and unnecessary troubleshooting process. No video? It could be something to do with the monitor or the video card. Can't get to your favorite website? Is it that site? Is it your network card, the cable, your IP address, DNS server settings, or something else? Once you have defined the problem, establishing a theory about the cause of the problem—what is wrong—helps you develop possible solutions to the problem.
Theories can state either what can be true or what can't be true. However you choose to approach your theory generation, it's usually helpful to take a mental inventory to see what is possible and what is not. Start eliminating possibilities, and eventually the only thing that's left is what's wrong. This type of approach works well when it's an ambiguous problem; start broad and narrow your scope. For example, if data on the hard drive is inaccessible, there is likely one of three culprits: the drive itself, the cable it's on (if applicable), or the connector on the motherboard. Try plugging the drive into another connector or using a different cable. Narrow down the options.
Once you have isolated the problem, slowly rebuild the system to see if the problem comes back (or goes away). This helps you identify what is really causing the problem and determine if there are other factors affecting the situation. For example, we have seen memory problems that are fixed by moving the memory modules from one slot to another.
Sometimes, you can figure out what's not working, but you have no idea why or what you can do to fix it. That's okay. In situations like these, it may be best to fall back on an old trick called reading the manual. As they say, “When all else fails, read the instructions.” The service manuals are your instructions for troubleshooting and service information. Virtually every computer and peripheral made today has service documentation on the company's website. Don't be afraid to use it!
If you're fortunate enough to have experienced, knowledgeable, and friendly coworkers, be open to asking for help if you get stuck on a problem. Trading knowledge between coworkers not only builds the skill level of the team, but can also build camaraderie.
You've eliminated possibilities and developed a theory as to what the problem is. Your theory may be specific, such as “the power cable is fried,” or it may be a bit more general, like “we can't access the hard drive” or “there's a connectivity problem.” No matter your theory, now is the time to start testing it. Again, if you're not sure where to begin to find a solution, the manufacturer's website is a good place to start!
This step is the one that even experienced technicians overlook. Often, computer problems are the result of something simple. Technicians overlook these problems because they're so simple that the technicians assume they couldn't be the problem. Here are some simple questions to ask:
If you suspect user error, tread carefully in regard to your line of questioning, to avoid making the user feel defensive. User errors provide an opportunity to teach the users the right way to do things. Again, what you say matters. Offer a “different” or “another” way of doing things instead of the “right” way.
After you test the theory to determine the cause, one of two things will happen. Either the theory will be confirmed or it won't be. Said differently, you were right or wrong. There's nothing wrong with having an initial theory turn out to be incorrect—it just means going back to the drawing board and looking for another explanation. More explicitly, here's what to do next:
As we just said, it's okay to be wrong with your first guess on the cause of the problem. We can't count the number of times we've said, “Huh, maybe that wasn't the problem—weird,” or some variation thereof. It happens—some problems are very complicated. Focus on isolating the issue to narrow down the possible culprits, and double-check manuals or online resources if needed.
If you've tried everything you can think of, or perhaps are in a tense situation that's time-sensitive and you feel in over your head, don't be afraid to escalate the problem. Asking for help can be hard, because sometimes it feels like you've failed in some way. Don't feel bad about it—sometimes getting a second opinion makes all the difference.
If your theory was right and the fix worked, then you're brilliant! If not, you need to look for the next option. After testing the theory, establish a plan of action to resolve the problem and implement the solution. This may take one of the following three paths:
If the solution worked, and there are no other affected computers, verify full system functionality. We'll discuss that after we talk about what to do if the first fix didn't work, or you need to apply the fix to multiple systems.
So you tried the hard drive with a new (verified) cable and it still doesn't work. Now what? Your sound card won't play and you've just removed and reinstalled the driver. Next steps? Move on and try the next logical thing in line.
When evaluating your results and looking for that golden “next step,” don't forget about other resources that you might have available. Use the Internet to look at the manufacturer's website. The vendor's instructions could prove invaluable. Read the manual. Talk to your friend who knows everything about obscure hardware (or arcane versions of Windows). When fixing problems, two heads can be better than one.
If the problem was isolated to one computer, this step doesn't apply. But some problems that you deal with may affect an entire group of computers. For example, perhaps some configuration information was entered incorrectly into the DHCP server, giving everyone the wrong DNS server address. The DHCP server is now fixed, but all the clients need to renew their IP addresses. Or, maybe a software update that was pushed to all client computers messed up a configuration, and you happened to be first on the scene. Now it's time to resolve it for all computers that are affected.
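As one hypothetical illustration of that DHCP scenario (our own example, not something prescribed by the chapter), the renewal on each affected Windows client could be scripted. The snippet below simply wraps the standard ipconfig switches; flushing the DNS cache clears out the stale server address as well.

# Hypothetical helper for the DHCP scenario above: run on each affected
# Windows client after the server-side fix to pull a fresh lease and
# clear any cached DNS information.
import subprocess

for args in (["ipconfig", "/release"],
             ["ipconfig", "/renew"],
             ["ipconfig", "/flushdns"]):
    subprocess.run(args, check=True)    # raises CalledProcessError on failure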
After fixing the system, or all the systems affected by the problem, go back and verify full functionality. For example, if the users couldn't get to any network resources, check to make sure they can get to the Internet as well as to internal resources.
Some solutions may accidentally cause another problem on the system. For example, if you update software or drivers, you may inadvertently cause another application to have problems. There's obviously no way that you can or should test all applications on a computer after applying a fix, but know that these types of problems can occur. Just make sure that what you've fixed works and that there aren't any obvious signs of something else not working all of a sudden.
Another important thing to do at this time is to implement preventive measures, if possible. If it was a user error, ensure that the user understands ways to accomplish the task that won't cause the error to recur. If a cable melted because it was too close to a space heater under someone's desk, resolve the issue. If the computer overheated because an inch of dust was clogging the fan…you get the idea.
A lot of people can fix problems. But can you remember what you did when you fixed a problem a month ago? Maybe. Can one of your coworkers remember something you did to fix the same problem on that machine a month ago? Unlikely. Always document your work so that you or someone else can learn from the experience. Good documentation of past troubleshooting can save hours of stress in the future.
Documentation can take a few different forms, but the two most common are personal and system-based.
We always recommend that technicians carry a personal notebook and take notes during the troubleshooting process. Some problems are long and complex, and it may be hard to remember which setting you changed 15 steps ago. The type of notebook doesn't matter—it can be digital or paper, big or small—use whatever works best for you. Take pictures with your phone if it will be helpful (if it's allowed). The notebook can be a lifesaver, especially when you're new to a job. Write down the problem, what you tried, and the solution. The next time you run across the same or a similar problem, you'll have a better idea of what to try. Eventually, you'll find yourself less and less reliant on your notebook, but it's incredibly handy to have.
System-based documentation is useful to both you and your coworkers. Many facilities have server logs of one type or another, conveniently located close to the machine. If someone makes a fix or a change, it gets noted in the log. If there's a problem, it's noted in the log. It's critical to have a log for a couple of reasons:
We've seen several different forms of system-based documentation. Again, the type of log doesn't matter as long as you use it. Often, it's a notebook or a binder next to the system or on a nearby shelf. If you have a rack, you can mount something on the side to hold a binder or notebook. For desktop computers, one way is to tape an index card to the top or side of the power supply (don't cover any vents), so if a tech has to go inside the case, they can see if anyone else has been in there fixing something too. Companywide electronic knowledge bases or incident repositories are also commonly used. It is just as important to contribute to these systems as to use them to help diagnose problems.
To many who are not familiar with computers, that whirring, humming box sitting on or under their desk is an enigma. They know what shows up on the screen, where the power button is, where to put a thumb drive, and what not to spill on their keyboard, but the insides are shrouded in mystery.
Fortunately for them, we're around. We can tell the difference between a hard drive and a motherboard and have a pretty good idea of what each part inside that box is supposed to do. When the computer doesn't work like it's supposed to, we can whip out our trusty screwdriver, crack the case, and perform surgery. And most of the time, we can get the system running just as good as new.
In the following sections, we're going to focus our troubleshooting efforts on the three key hardware components inside the case: motherboards, RAM, and CPUs, as well as power issues. These four components are absolutely critical to a computer system. Without a network card, you won't be able to surf the web. Without a processor, well, you won't be able to surf the web—or do much of anything else for that matter. So it makes sense to get started with motherboards, CPUs, RAM, and power.
Problems with these separate components can often present similar symptoms, so it's good to discuss them all at the same time. We will look at common symptoms for other hardware devices in Chapter 12.
As you continue to learn and increase your troubleshooting experience, your value will increase as well. This is because, if nothing else, it will take you less time to accomplish common repairs. Your ability to troubleshoot from past experiences and gut feelings will make you more efficient and more valuable, which in turn will allow you to advance and earn a better income. We will give you some guidelines that you can use to evaluate common hardware issues that you're sure to face.
Before we get into specific components, let's take a few minutes to talk about hardware symptoms and their causes at a general level. This discussion can apply to a lot of different hardware components.
Some hardware issues are pretty easy to identify. If flames are shooting out of the back of your computer, then it's probably the power supply. If the power light on your monitor doesn't turn on, it's the monitor itself, the power cord, or your power source. Other hardware symptoms are a bit more ambiguous. We'll now look at some general hardware-related symptoms and their possible causes.
Have you ever been working on a computer and heard a noise that resembles fingernails on a chalkboard? If so, you will always remember that sound, along with the impending feeling of doom as the computer stops working.
Some noises on a computer are normal. The POST beep (which we'll talk about in a few pages) is a good sound. The whirring of a mechanical hard drive and power supply fan are familiar sounds. Some techs get so used to their “normal” system noises that if anything is slightly off pitch, they go digging for problems even if none are readily apparent.
A simple rule to remember about grinding and other random noises is this: for a component to make a noise, it has to move. In other words, components with no moving parts (such as RAM, SSDs, and CPUs) don't make sounds. Mechanical hard drives have motors that spin the platters. Power supply fans spin. Optical drives spin the discs. If you're hearing a grinding, whirring, scraping, or other noise that you didn't expect, these are the likely culprits.
If you hear a whining sound and it seems to be fairly constant, it's more than likely a fan. Either it needs to be cleaned (desperately) or replaced. Power supplies that are failing can also sound louder and quieter intermittently because a fan will run at alternating speeds.
The “fingernails on a chalkboard” squealing could be an indicator that the heads in a mechanical hard drive have crashed into the platter. Thankfully, this isn't very common today, but it still happens. (Future generations of technicians will never know this sound, with the prevalence of SSDs today!) Note that this type of sound can also be caused by a power supply fan's motor binding up. A rhythmic ticking sound is also likely to be caused by a mechanical hard drive.
Problems with optical drives tend to be the easiest to diagnose. Those drives aren't constantly spinning unless you put some media in them. If you put a disc in and the drive makes a terrible noise, you have a good idea what's causing the problem.
So what do you do if you hear a terrible noise from the computer? If it's still responsive, shut it down normally as soon as possible. If it's not responsive, then shut off the power as quickly as you can. Examine the power supply to see if there are any obvious problems such as excessive dust, and clean it as needed. Power the system back on. If the noise was caused by the hard drive, odds are that the drive has failed and the system won't boot normally. You may need to replace some parts.
If the noise is mildly annoying but doesn't sound drastic, boot up the computer with the case off and listen. By getting up close and personal with the system, you can often tell where the noise is coming from and then troubleshoot or fix the appropriate part.
Electronic components produce heat; it's a fact of life. While they're designed to withstand a certain amount of the heat that's produced, excessive heat can drastically shorten the life of components. There are two common ways to reduce heat-related problems in computers: heat sinks and cooling systems, such as case fans.
Any component with its own processor will have a heat sink. Typically these look like big, finned hunks of aluminum or another metal attached to the processor. Their job is to dissipate heat from the component so that it doesn't become too hot. Never run a processor without a heat sink! Nearly all video cards built today have GPUs with heat sinks as well.
Case fans are designed to take hot air from inside the case and blow it out of the case. There are many different designs, from simple motors to high-tech liquid-cooled models. Put your hand up to the back of your computer at the power supply fan and you should feel warm air. If there's nothing coming out, you either need to clean your fan out or replace your power supply. Some cases come with additional cooling fans to help dissipate heat. If your case has one, you should feel warm air coming from it as well.
Dust, dirt, grime, smoke, and other airborne particles can become caked on the inside of computers and cause overheating as well. This is most common in automotive and manufacturing environments. The contaminants create a film that coats the components, causing them to overheat and/or conduct electricity on their surface. Blowing out these exposed systems with a can of compressed air from time to time can prevent damage to the components. While you're cleaning the components, be sure to clean any cooling fans in the power supply or on the heat sink.
One way to ensure that dust and grime don't find their way into a desktop computer is to always leave the blanks (or slot covers) in the empty slots on the back of the case. Blanks are the pieces of metal or plastic that come with the case and cover the expansion slot openings. They are designed to keep dirt, dust, and other foreign matter from the inside of the computer. They also maintain proper airflow within the case to ensure that the computer doesn't overheat.
Components that overheat a lot will have shorter lifespans. Sometimes they simply fail. Other times, they will cause intermittent shutdowns before they fail. A PC that works for a few minutes and then locks up is probably experiencing overheating because of a heat sink or fan not functioning properly. To troubleshoot overheating, first check all fans inside the PC to ensure that they're operating, and make sure that any heat sinks are firmly attached to their chips.
In a properly designed, properly assembled PC case, air flows in a specific path driven by the power supply fan and using the power supply's vent holes. Make sure that you know the direction of flow and that there are limited obstructions and no dust buildup. Cases are also designed to cool by making the air flow in a certain way. Therefore, operating a PC with the cover removed can make a PC more susceptible to overheating, even though it's “getting more air.”
Although CPUs are the most common component to overheat, video cards are also quite susceptible, especially for those with high-end graphics needs such as video producers, graphics designers, and gamers. Occasionally other chips on the motherboard—such as the chipset or chips on other devices—may also overheat. Extra heat sinks, fans, or higher-end cooling systems may be installed to cool these chips.
If the system is using a liquid cooling system, know that they have their own set of issues. The pump that moves the liquid through the tubing and heat sinks can become obstructed or simply fail. If this happens, the liquid's temperature will eventually equalize with that of the CPU and other components, resulting in their damage. Dust in the heat sinks has the same effect as with non-liquid cooling systems, so keep these components as clean as you would any such components. Check regularly for signs of leaks that might be starting and try to catch them before they result in damage to the system.
A burning smell or smoke coming from your computer is never a good thing. While it normally gets warm inside a computer case, it should never get hot enough in there to melt plastic components or cause visible damage. Unfortunately, that does happen from time to time, and power problems can sometimes cause components to get hot enough to smoke.
If you smell an odd odor or see smoke coming from a computer, shut it down immediately. Open the case and start looking for visible signs of damage. Things to look for include melted plastic components and burn marks on circuit boards. The good news about visible damage is that you can usually figure out which component is damaged pretty quickly. The bad news is that it often means you need to replace parts.
Visible damage to the outside of the case or the monitor casing might not matter much as long as the device still works. But if you're looking inside a case and see burn marks or melted components, that's a sure sign of a problem. Replace damaged circuit boards or melted plastic components immediately. After replacing the part, it's a good idea to monitor the new component for a while too. The power supply could be causing the problem. If the new part fries quickly too, it's time to replace the power supply as well.
Intermittent problems are absolutely the worst to deal with. They are frustrating for technicians because the system will inevitably work properly when the tech is there to fix it. The users also get frustrated because they see the problem happen, but, of course, it works fine when the tech shows up!
Treat intermittent failures just as you would a persistent issue, if at all possible. See if there were any error messages, or if it happens when the user tries a certain action. Maybe it occurs only when the system has been on for a while or when a specific application is open. Try to narrow it down as much as possible. In many cases, an intermittent failure means that the device is slowly but surely dying and needs to be replaced. If it's something obvious, such as a network card or a disk read/write failure, you know what to replace. If not, and it's something random like intermittent lockups, it may take trial-and-error to find the right part to replace, especially if there were no error messages. Intermittent or unexpected lockups or shutdowns may be a motherboard, CPU, or RAM problem. If you have nothing else to go on, try replacing one at a time to see if that resolves the issue.
Every computer has a Basic Input/Output System (BIOS) or Unified Extensible Firmware Interface (UEFI) that acts as an interface between the computer's hardware and any operating system installed on that hardware. It's the first software (technically, firmware) that loads as a system boots, and as such, it plays a critical role in computer operation. If the BIOS/UEFI doesn't work properly, the user will never get to the point where their operating system starts. In this section, we will cover issues specific to the BIOS/UEFI, as well as the boot process it controls, which is the power-on self-test (POST).
As already mentioned, the BIOS/UEFI is the layer between the hardware and the operating system. Older systems may have a BIOS; UEFI is newer and has more features.
Out-of-Date BIOS First, computer BIOSs don't go bad; they just become out of date. This isn't necessarily a critical issue; they will continue to support the hardware that came with the box. It does, however, become an issue when the BIOS doesn't support some component that you would like to install—virtualization, for instance.
Most of today's BIOSs are written to an EEPROM and can be updated through the use of software. This process is called flashing the BIOS. Each manufacturer has its own method for accomplishing this. Check the documentation for complete details.
Checking the Boot Priority Finally, remember that your BIOS also contains the boot priority (also sometimes called the boot sequence) for your system. You probably boot to the first hard drive in your system (the one that contains the OS boot files), but you can also set your BIOS to boot from a secondary hard drive, an optical drive, a USB port, or the network. If your computer can't find a proper boot device, it could be that it's attempting to boot to an incorrect device. Check the BIOS to see if you need to change the boot sequence. To do this, perform the following steps:
If the changes don't hold the next time you reboot, check the battery.
FIGURE 11.1 UEFI boot priority settings
Every computer has a diagnostic program built into its BIOS/UEFI called the POST. When you turn on the computer, it executes this set of diagnostics. Many steps are involved in the POST, but they happen very quickly, they're invisible to the user, and they vary among BIOS/UEFI vendors. The steps include checking the CPU, checking the RAM, checking for the presence of a video card, and verifying basic hardware functionality. The main reason to be aware of the POST's existence is that if it encounters a problem, the boot process stops. Being able to determine at what point the problem occurred could help you troubleshoot.
If the computer doesn't perform the POST as it should, one way to determine the source of a problem is to listen for POST code beeps, also known as a beep code. This is a series of beeps from the computer's speaker. A successful POST generally produces a single beep. If there's more than one beep, the number, duration, and pattern of the beeps can sometimes tell you what component is causing the problem. However, the beeps differ depending on the BIOS manufacturer and version, so you must look up the beep code in a chart for your particular BIOS. AMI BIOS, for example, relies on the number of beeps and uses patterns of short and long beeps. Unfortunately, not all computers today give any beep codes because they don't contain the internal piezoelectric speaker.
Another way to determine a problem during the POST routine is to use a POST card. This is a circuit board that fits into an expansion slot (PCIe, PCI, or USB) in the system and reports numeric codes as the boot process progresses. Each code corresponds to a particular component being checked. If the POST card stops at a certain number, you can look up that number in the manual for the card to determine the problem. Figure 11.2 shows an example of a PCI POST card. USB POST cards are easy to use—you don't have to crack the case to check for POST errors—and they can be used to test laptops as well.
FIGURE 11.2 PCI POST card
POST card 98usd by Rumlin—Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons.
Most motherboard and CPU problems manifest themselves by the system appearing to be completely dead. However, “completely dead” can be a symptom of a wide variety of problems, not only with the CPU or motherboard but also with the RAM or the power supply. At other times, a failing motherboard or CPU will cause the system to lock up completely, or “hang,” requiring a hard reboot, or the failing motherboard or CPU may cause continuous reboots. A POST card may be helpful in narrowing down the exact component that is faulty.
When a motherboard fails, it's usually because it has been damaged. Most technicians can't repair motherboard damage; the motherboard must be replaced. Motherboards can become damaged due to physical trauma, exposure to electrostatic discharge (ESD), or short-circuiting. To minimize the risk, observe the following rules:
A CPU may fail because of physical trauma or short-circuiting, but the most common cause for a CPU not to work is overheating, and most overheating issues are due to installation failures. This means that the heat sink and/or fan must be installed properly along with the processor. For example, air gaps in the thermal paste between the CPU and heat sink can cause the processor to run hot and eventually burn up. With a PGA- or LGA-style CPU, ensure that the CPU is oriented correctly in the socket. With an older SECC- or ZIF-style CPU, make sure the CPU is completely inserted into its slot or socket.
Input/output (I/O) ports are most often built into the motherboard and include USB as well as legacy parallel and serial ports. All of them are used to connect external peripherals to the motherboard. When a port doesn't appear to be functioning, make sure the following conditions are met:
If you suspect that it's the port, you can purchase a loopback plug to test its functionality. If you suspect that the cable, rather than the port, may be the problem, swap out the cable with a known good one. If you don't have an extra cable, you can test the existing cable with a multimeter by setting it to ohms and checking the resistance between one end of the cable and the other.
Use a pin-out diagram, if available, to determine which pin matches up to which at the other end. There is often—but not always—an inverse relationship between the ends. In other words, at one end pin 1 is at the left, and at the other end it's at the right on the same row of pins. You see this characteristic with D-sub connectors where one end of the cable is male and the other end is female.
Sometimes you will run into a computer with no video output—a black screen. This symptom could be the fault of a few different components. Recall that for video to be produced, there needs to be a video card, a cable, and a display, and the display may have several internal components, such as the screen itself, an inverter, a backlight, and others. We'll cover the symptom here, though, because with so many motherboards today having video circuitry built into them, the culprit could very well be the motherboard.
If there's no video, as always, check the obvious first. Is the monitor plugged in and turned on? Does it appear to be getting a signal from the video card? Does the light on the monitor make it look like the monitor has gone into sleep mode? If so, perhaps turning the monitor off and back on will do the trick.
If you've checked the usual suspects, try a different monitor and video cable. If there's still no video, then you may try a different video card. On almost all motherboards with built-in video circuitry, the onboard video electronics will be disabled when an expansion video card is installed. Of course, make sure you're plugging the monitor into the new video card and not the video port on the motherboard! (It's happened to all of us, and yes, it can be kind of embarrassing.) If the video circuitry on the motherboard is faulty, either use an expansion card or replace the board.
Many motherboards have capacitors on them, which store electricity. They are short cylindrical tubes. Sometimes, when capacitors fail, they will swell and brownish-red electrolyte residue may seep out of the vents in the top—an example is shown in Figure 11.3. These are called distended capacitors, also known as capacitor swelling.
If a capacitor fails, the motherboard will not work. You have a couple of options:
FIGURE 11.3 Distended capacitors on a motherboard
By Bushytails at English Wikipedia—Own work, CC BY-SA 3.0
Isolating memory issues on a computer is one of the most difficult tasks to do properly because so many memory problems manifest themselves as software issues. For example, memory problems can cause application crashes and produce error messages such as general protection faults (GPFs). Memory issues can also cause a fatal error in your operating system, producing proprietary crash screens such as the infamous Blue Screen of Death (BSOD) in Windows or the rotating pinwheel in macOS. Sometimes these are caused by the physical memory failing. At other times, they are caused by bad programming, when an application writes into a memory space reserved for the operating system or another application.
In short, physical memory problems can cause app and system lockups, unexpected shutdowns or reboots, or the errors mentioned in the preceding paragraph. They can be challenging to pin down. If you do get an error message related to memory, be sure to write down the memory address if the error message gives you one. If the error happens again, write down the memory address again. If it's the same or a similar address, then it's very possible that the physical memory is failing. You can also use one of several hardware- or software-based RAM testers to see if your memory is working properly. Sometimes switching the slot that the RAM is in will help, but more often than not the RAM needs to be replaced.
Memory issues can also be caused by the virtual memory, which is an area of the hard drive set aside to emulate memory. The operating system creates and manages a paging file (in Windows, it's called PAGEFILE.SYS) on the hard drive to act as memory when the system needs more than what the physical RAM can provide; oftentimes, this paging file is dynamic in size. If the hard drive runs out of room for the paging file, memory issues can appear, or the system may have sluggish performance. As a rule of thumb, ensure that at least 10 percent of the hard drive space is free.
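As a quick way to check that rule of thumb, here is a short sketch of our own (not part of the chapter) that reports whether a drive still has at least 10 percent free space; the drive letter is a placeholder.

# Checks the "keep at least 10 percent of the drive free" rule of thumb.
# The path is a placeholder; point it at the drive that holds the paging file.
import shutil

def free_space_ok(path="C:\\", minimum_percent=10):
    usage = shutil.disk_usage(path)    # named tuple: total, used, free (in bytes)
    percent_free = usage.free / usage.total * 100
    print(f"{path} is {percent_free:.1f} percent free")
    return percent_free >= minimum_percent

if __name__ == "__main__":
    if not free_space_ok():
        print("Low on space; the paging file may not have room to grow.")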
Power supply problems can manifest themselves in two ways. In the first, you will see an obvious problem, such as an electrical flash or possibly a fire. In the second, the system doesn't respond in any way when the power is turned on. Hopefully you don't have to deal with many of the first type!
When the system doesn't respond (“no power”) when you try to power it up, make sure the outlet is functional and try a new power cable. If those check out, open the case, remove the power supply, and replace it with a new one. Partial failures, or intermittent power supply problems, are much less simple. A completely failed power supply gives the same symptoms as a malfunctioning wall socket, uninterruptible power supply (UPS), or power strip; a power cord that is not securely seated; or some motherboard shorts (such as those caused by an improperly seated expansion card, memory stick, CPU, and the like). You want to rule out those items before you replace the power supply and find that you still have the same problem as when you started.
At other times, the power supply fan might spin but the rest of the system does not appear to get power. This can be a power supply issue or possibly a motherboard issue. (Recall that the power supply plugs into the motherboard, and several devices don't have a power cable but power themselves from the motherboard.) Be aware that different cases have different types of on/off switches. The process of replacing a power supply is a lot easier if you purchase a replacement with the same mechanism.
If you're curious as to the state of your power supply, you can buy hardware-based power supply testers online starting at about $10 and running up to several hundred dollars. Multimeters are also effective devices for testing your power supplies.
Exercise 11.1 walks you through the steps of troubleshooting a few specific hardware problems. The exercise will probably end up being a mental one for you, unless you have the exact problem that we're describing here. As practice, you can write down the steps that you would take to solve the problem and then check to see how close you came to our steps. Clearly, there are several ways to approach a problem, so you might use a slightly different process, but the general approach should be similar. Finally, when you have found the problem, you can stop. As you go through each step, assume that it didn't solve the issue so you need to move on to the next step.
This chapter addressed the best practice methodology for resolving computer problems as well as troubleshooting core hardware components. In our discussion of troubleshooting theory, you learned that you need to take a systematic approach to problem solving. Both art and science are involved, and experience in troubleshooting is helpful but not a prerequisite to being a good troubleshooter. You learned that in troubleshooting, the first objective is to identify the problem. Many times, this can be the most time-consuming task.
Once you've identified the problem, you need to establish a theory of why the problem is happening, test your theory, establish a plan of action, verify full functionality, and then document your work. Documentation is frequently the most overlooked aspect of working with computers, but it's an absolutely critical step.
Next, we investigated the causes and symptoms of hardware problems, such as noise, excessive heat, burning smells and smoke, visible damage, and intermittent device failure. After the discussion of general hardware, we talked about issues specific to internal components, including the motherboard, CPU, RAM, and power supply.
Know the steps to take in troubleshooting computers. First, identify the problem. Then, establish a theory of probable cause, test the theory to determine the cause, establish a plan of action to resolve the problem and implement the solution, verify full system functionality, and, finally, document your findings, actions, and outcomes.
Understand what happens during the POST routine. During the power-on self-test (POST), the BIOS checks to ensure that the base hardware is installed and working. Generally, one POST beep is good. Any more than that and you might have an error.
Understand problems related to the system BIOS/UEFI. BIOS/UEFI settings are maintained by the CMOS battery when the system is powered off. If the system keeps losing the date and time or boot settings, it could indicate a problem with the CMOS battery.
Know what is likely to cause unexpected shutdowns, system lockups, continuous reboots, and intermittent device failures. All these issues can be caused by a failing motherboard, CPU, or RAM. In the case of other intermittent device failures, it could be that specific device as well. Many times these issues are exacerbated by overheating.
Understand common problems that power supplies can cause. Power supplies can fry components, but they can also cause no power, grinding or squealing noises, spinning fans but no power to other devices, smoke, and burning smells.
Know which devices within a system can make loud noises. Loud noises are usually not welcome, unless you intend for them to come from your speakers. Generally speaking, only devices with moving parts, such as HDDs, power supplies, and fans, can produce unwanted loud noises.
Know how to avoid overheating. Using fans and heat sinks will help to avoid overheating. Also know that overclocking the processor can cause overheating.
Know the proprietary crash screens for Windows and Mac operating systems. Windows has the Blue Screen of Death (BSOD), whereas macOS uses the pinwheel.
Understand what a distended capacitor is. It's when a capacitor swells and possibly bursts, releasing a reddish-brown electrolyte. A motherboard with distended capacitors will likely fail. Don't touch the electrolyte!
Know what causes a black screen. Black screens are likely the fault of the video card, video cable, or display unit. If the video circuitry is built into the motherboard, it could be a faulty motherboard as well.
Understand causes of sluggish performance. Generally, sluggish performance is related to the memory or hard drive. If either is being overused, the system will be slower to respond. Overworked CPUs can also cause sluggish performance.
Know what causes application crashes. App crashes are most likely one of two things: a poorly coded app or faulty memory.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance-based questions on the A+ exam. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
Place the following steps (and sub-steps) of the best practice methodology in order:
THE FOLLOWING COMPTIA A+ EXAM 220‐1101 OBJECTIVES ARE COVERED IN THIS CHAPTER:
Hardware problems are sometimes very easy to identify. If you push the power button and nothing happens, you can be pretty confident that it's not the fault of the operating system. Other hardware problems are more complicated. For example, memory issues may cause errors that look like they're the fault of an application, or what appears to be a failing device may be resolved by updating the software driver. So while this chapter focuses on hardware, understand that in many cases you'll need to know how to navigate through apps and software settings to really narrow down the issue.
Network problems introduce even more variables to consider. Instead of just focusing on the local machine, now you need to consider the device at the other end of the connection as well. Perhaps that connection is wired, so it's easy to see if the cable is plugged in and port lights are flashing. If it's wireless, though, the signal travels magically through the air, and you might not have blinking lights to guide the way.
With all of the integration between software applications and hardware components, it can be challenging to understand where one stops and the other starts, or how their interoperation affects one another. To top it all off, you're probably going to be working in an environment that requires you to understand not just one computer but a network full of workstations, servers, switches, routers, and other devices, and how they should play nicely together.
This introduction isn't intended to scare you, but rather to point out that computers are complicated and troubleshooting can be hard. You already know that, but give yourself some latitude in troubleshooting, because it's not always easy. Prepare yourself with a solid foundation of knowledge and never be afraid to consult other resources, such as the Internet or coworkers. Situations will arise that make even the most experienced technicians shake their heads in frustration.
Sometimes, you will hear people say things like, “It just takes practice and experience to become good at troubleshooting.” Those words are of little comfort to someone who is relatively new and facing a challenging problem. Yes, experience does help, but even newer technicians can be effective troubleshooters if they understand the fundamentals and follow a logical process.
As you learned in Chapter 11, “Troubleshooting Methodology and Resolving Core Hardware Problems,” the best way to tackle any problem is to take a systematic approach to resolving it. This applies to the hardware and networking issues that we'll talk about here as well as the software and security issues that we will cover in Chapter 19, “Troubleshooting Operating Systems and Security.”
Recall from Chapter 11 that while experience helps, novice technicians can be effective troubleshooters as well. Troubleshooting becomes a lot easier if you follow logical procedures to help you develop experience. The first thing to do is always to check the easy stuff, such as physical cables and connections. You would be amazed at how many times the simple question “Is it plugged in?” resolves hardware problems. Second, see if anything has recently changed or if there are any recent incidents that might have caused the problem. For example, if someone's laptop won't boot up, you might not have a clue as to why. But if they tell you that they just dropped it down the stairs, you might have a better idea of where to start. Finally, narrow down the scope of the problem. Find out exactly what works and what doesn't. Knowing where the problem starts and stops helps you to focus your troubleshooting efforts.
This chapter finishes the hardware troubleshooting discussion we started in Chapter 11. Specifically, we will look at the following:
You may wonder why we broke hardware troubleshooting into multiple chapters. If we're being honest, it's due to the large number of problems that could arise, the volume of A+ exam objectives that this creates, and the amount of room it takes to discuss them all. In other words, we didn't want to hit you with a massively long chapter!
Even with breaking it up, though, there's no way that a reasonably sized book could teach you about all the possible issues you could face. Nor would it be logical for you to try to memorize them all. Instead, we focus on some of the common issues and help you think through the process of narrowing down the possibilities so that you can efficiently resolve any problem you encounter. Armed with that knowledge, you should have the confidence to tackle any problem you face, whether it's familiar to you or not.
Even though storage devices aren't strictly required for a computer to run, persistent storage is almost universally included in computing devices today. And when storage devices don't work, users tend to get upset. Losing a hard drive's worth of data is incredibly frustrating, especially if there's no suitable backup.
Storage devices present unique problems simply due to their nature—there are multiple technologies in use today. Some of them are devices with moving parts, which means that they are more prone to mechanical failure than a motherboard or a stick of RAM. Others function essentially like RAM and plug directly into a socket instead of needing a cable. If they fail, there's not much you can do besides replace them.
Before we get into common symptoms and solutions, remember that storage system problems usually stem from one of the following three causes:
The first and last causes are easy to identify, because in either case, the symptom will be obvious: the drive won't work. You won't be able to get the computer to communicate with the disk drive. The way to see which component is at fault is to disconnect and reconnect, or to try the device in another system (or try another drive in the affected system). However, if the problem is a bad or failing disk drive, the symptoms aren't always as obvious. Those are problems we will need to dive deeper into. In the following sections, we'll discuss hard disk problems, including using S.M.A.R.T. technology and RAID arrays. Then we'll finish by taking a quick look at optical drive issues.
While looking at light‐emitting diode (LED) status indicators is listed as a subobjective specifically under storage drives, it could be put in several different places as well. A lot of devices have lights that can indicate whether or not a component is working. Storage systems usually have some sort of activity indicator that blinks when the drive is busy either reading or writing data. If the light never comes on, or if the light is on constantly without flickering, there could be a problem.
External network attached storage (NAS) and redundant array of independent (or inexpensive) disks (RAID) storage enclosures have lights as well and may have many more than a standard desktop or laptop computer. For example, many RAID arrays have a light that only illuminates if a drive has failed and needs to be replaced. We'll get into RAID in more detail later in this section. The point is, look for indicator lights and understand what they're communicating to you.
Storage devices that have moving components will make sounds. Mechanical HDDs have a whirring sound as the platters spin, and an irregular ticking or clicking sound when reading and writing. An optical drive spins up when a disc is inserted, and it too will whirr. SSDs have no moving parts, and therefore they do not make sounds. (Well, technically, if one were to get fried it could make an electrical pop. If you hear that, the drive is toast.)
A grinding noise from a storage device is a very bad sign. That means there is a failure in the motor or spindle, or if it sounds more like fingernails on a chalkboard, it means the read/write heads have crashed into the platter and are cutting grooves into it. You'll only need to hear that sound once to remember it forever. If the drive is still operational, get all important data off the drive immediately and replace the drive. A regular, rhythmic ticking or clicking sound is bad too—that usually means the drive is failing or has failed. The solution is the same as for a grinding noise. If you can, get important data off the drive and then replace it.
If a storage device is plugged in and working, the BIOS/UEFI should detect it first, and then as the operating system loads, it will recognize the drive as well. If someone gives you the symptom that their hard drive isn't being found, the first thing to do is clarify where it's not being found. Is it missing in the BIOS/UEFI or in the operating system?
Bootable Device Not Found This could manifest itself in a few different ways, such as a complete failure to boot, the hard drive not being recognized by the BIOS/UEFI, or the OS not being found. Failure to boot at all likely means the drive is not properly connected or it's dead. Do your due diligence: reseat the connections, try different cables, or try the drive in another machine if possible. Most BIOSs/UEFIs today autodetect the hard drive. If that auto‐detection fails, it's bad news for the hard drive, unless there's a cable or connection issue.
Finally, a system that boots fine but can't find the OS could indicate a problem with the master boot record (MBR) or boot sector on the hard drive. To fix this in any current version of Windows, boot to bootable media (USB or optical disc) and enter the Windows Recovery Environment (WinRE). In WinRE, you can get to a command prompt and use bootrec /fixmbr to fix the MBR and bootrec /fixboot to fix the boot sector.
In this category are times when the drive is working, but perhaps not as well as it should or once did. A+ exam objectives that fall into this group include:
A failing hard drive might exhibit data loss or corruption or very slow (extended) read/write times. These symptoms can also indicate that the hard drive is too full. Hard drives move information around a lot, especially temporary files. If the drive doesn't have enough free space (at least 10 percent), it can slow down dramatically. The solution here is to remove files or old applications to free up space and look at defragmenting the hard drive. If problems persist, consider formatting the hard drive and reinstalling the OS. If the issues don't go away, assume that the hard drive is on its last legs.
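If you want to check the free-space guideline from a script rather than clicking through the file manager, a few lines will do it. The following Python sketch uses the standard library's shutil.disk_usage; the C: drive path and the 10 percent threshold are assumptions that mirror the guideline above, so adjust them for your system.

```python
import shutil

def check_free_space(path="C:\\", warn_below_pct=10):
    """Warn if the drive holding 'path' has less free space than the threshold."""
    # Drive letter is an assumption; change it for the system you're checking.
    usage = shutil.disk_usage(path)              # named tuple: total, used, free (bytes)
    free_pct = usage.free / usage.total * 100
    if free_pct < warn_below_pct:
        print(f"Warning: only {free_pct:.1f}% free on {path} - "
              "free up space and consider defragmenting")
    else:
        print(f"{free_pct:.1f}% free on {path} - within the guideline")

check_free_space()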
Input/output operations per second (IOPS), pronounced eye‐ops, is an industry standard for how many reads and writes a storage unit can complete. IOPS is frequently quoted on dedicated storage systems such as NAS and RAID devices, but it's so variable and condition‐dependent that its usefulness is debatable. Still, if a device's IOPS steadily declines over time or is no longer fast enough to service the user's (most likely the network's) needs, it could be time to replace the device.
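To make the definition a little more concrete, here is a rough back-of-the-envelope relationship between IOPS, sustained throughput, and I/O size. This is a simplification that ignores queue depth, caching, and access patterns, and the numbers in the example are hypothetical.

```python
def estimated_iops(throughput_mb_s, io_size_kb):
    """Very rough estimate: sustained throughput divided by the size of each I/O."""
    return throughput_mb_s / (io_size_kb / 1024)

# Hypothetical numbers: 120 MB/s sustained with 4 KB random I/O
print(round(estimated_iops(120, 4)))   # about 30,720 I/O operations per second
```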
The most popular tool used to measure IOPS is Iometer (sourceforge.net/projects/iometer); it's open source and available for Windows and Linux. Iometer runs simulated disk reads/writes and provides results in a graphical interface (Figure 12.1) and a CSV file. For purposes of the A+ exam, don't worry about memorizing any specific metrics or thresholds for IOPS. Instead, understand what it is, know that it can be measured, and know that if performance decreases over time it could indicate an issue with a storage device.
FIGURE 12.1 Iometer test results
As of 2004, nearly every hard drive has been built with Self‐Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.) software installed. S.M.A.R.T. monitors hard drive reliability and theoretically can warn you in the event of an imminent failure. The idea behind S.M.A.R.T. is great. Who wouldn't want to know when their hard drive was going to fail so they could back up the drive? In practice, though, it seems to help manufacturers locate persistent issues by identifying hard drive design flaws more than it helps end users avoid catastrophic data losses. Helping hard drive manufacturers do a better job isn't a bad thing, but S.M.A.R.T. hasn't enjoyed widespread commercial success with end users. This can largely be attributed to the following three factors:
Let's address the three issues in order. First, you can download one of several graphical tools from the Internet if you want to run S.M.A.R.T. diagnostics on a hard drive. Table 12.1 gives you a few options. Each one has a free option, and they all offer a variety of hard drive diagnostic capabilities.
Name | Website |
---|---|
GSmartControl | http://gsmartcontrol.sourceforge.io |
SpeedFan | http://almico.com/speedfan.php |
HD Tune | http://hdtune.com |
CrystalDiskInfo | http://crystalmark.info/en/software/crystaldiskinfo |
TABLE 12.1 S.M.A.R.T. software utilities
Second, yes, S.M.A.R.T. reports a lot of metrics, not all of which make sense. Figure 12.2 shows the output from GSmartControl version 1.1.3; you can tell that two metrics appear to be problematic because they are highlighted. Pink highlights show a warning, and red highlights indicate a failure. The question is, which metrics are most likely to predict drive failure?
In 2014, Google and cloud service provider Backblaze ran large‐scale tests to determine which metrics most strongly correlated with drive failure. Their results showed five metrics, which are highlighted in Table 12.2.
ID | Attribute name | Description |
---|---|---|
05 | Reallocated sector count | The number of bad sectors that have been found and remapped during read/write processes. Any nonzero number could indicate a problem. |
187 | Reported uncorrectable errors | The number of errors that could not be recovered using hardware error correction |
188 | Command timeout | The number of failed hard drive read/write operations due to disk timeout |
197 | Current pending sector count | The number of unstable sectors waiting to be remapped |
198 | Uncorrectable sector count | The total number of bad sectors when reading from or writing to a sector |
TABLE 12.2 S.M.A.R.T. metrics most correlated with hard drive failure
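If you export raw attribute values from one of the utilities in Table 12.1, a short script can apply the findings in Table 12.2 by flagging nonzero values on the five failure-correlated IDs. The Python sketch below is illustrative only; the sample values are made up.

```python
# S.M.A.R.T. attribute IDs most correlated with drive failure (per Table 12.2)
FAILURE_IDS = {
    5: "Reallocated sector count",
    187: "Reported uncorrectable errors",
    188: "Command timeout",
    197: "Current pending sector count",
    198: "Uncorrectable sector count",
}

def flag_risky_attributes(raw_values):
    """Return the failure-correlated attributes that have nonzero raw values."""
    return {FAILURE_IDS[attr_id]: value
            for attr_id, value in raw_values.items()
            if attr_id in FAILURE_IDS and value > 0}

# Hypothetical raw values exported from a S.M.A.R.T. utility
sample = {5: 24, 9: 18123, 187: 0, 194: 38, 197: 2}
print(flag_risky_attributes(sample))
# {'Reallocated sector count': 24, 'Current pending sector count': 2}
```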
FIGURE 12.2 S.M.A.R.T. report from GSmartControl
Interestingly enough, metrics related to higher temperatures or the number of reboots did not correlate to drive failure. The old adage that you should leave your computer running to make the hard drive last longer wasn't verified by the research. In addition, over half of the drives in the study failed without recording a sector error, and over 30 percent of the drives failed with no S.M.A.R.T. error whatsoever.
What does that mean for the drive shown in Figure 12.2, which has an error on ID 5? Maybe not much. The same drive passed that ID when scanned with SpeedFan (see Figure 12.3). The safe conclusion is that S.M.A.R.T. can provide useful diagnostics on a hard drive's health, but it's by no means a guaranteed problem finder.
FIGURE 12.3 SpeedFan S.M.A.R.T. output
As for the last issue (there being little consistency between hard drive manufacturers), that's an annoyance but not a critical issue. All it really means is that you can't compare data from one drive manufacturer with that of another. It's likely that if you're running S.M.A.R.T. data on a hard drive, you're primarily concerned with that drive's performance, not how it compares to other hard drives. If you have a situation where you're worried about a drive, you can benchmark its performance and track it over time, or you can just replace it.
Exercise 12.1 has you download a S.M.A.R.T. software utility and test your hard drive.
If you are using a redundant array of independent (or inexpensive) disks (RAID) system, you have additional challenges to deal with. First, you have more disks, so the chance of having a single failure increases. Second, you more than likely have one or more additional hard disk controllers, so again you introduce more parts that can fail. Third, there will likely be a software component that manages the RAID array.
Boiling it down, though, dealing with RAID issues is just like dealing with a single hard drive issue, except that you have more parts that make up the single storage unit. If your RAID array isn't found or stops working, try to narrow down the issue. Is it one disk that's failed, or is the whole system down, indicating a problem with a controller or the software? Along with external enclosures, which require a separate connection to the computer, most external RAID systems have status indicators and troubleshooting utilities to help you identify problems. Definitely use those to your advantage.
Finally, the problem could be dependent on the type of RAID you're using. If you are using RAID 0 (disk striping), you actually have more points of failure than a single device, meaning that you're at a greater risk of failure versus using just one hard drive. One drive failure will cause the entire set to fail. RAID 1 (disk mirroring) increases your fault tolerance; if one drive fails, the other has an exact replica of the data. You'll need to replace the failed drive, but unless both drives unexpectedly fail, you shouldn't lose any data. If you're using RAID 5 (disk striping with parity), a minimum of three drives are needed and a single drive failure usually means that your data will be fine, provided that you replace the failed drive. If two or more drives fail, the RAID 5 array will be lost and you will need to fix the array and then restore the data from backup. RAID 10 is a mirrored striped set that requires at least four drives. As long as one drive in each mirrored pair is functional (just like in RAID 1), you shouldn't lose any data.
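One way to internalize these differences is to look at how many simultaneous drive failures each RAID level is guaranteed to survive. The Python sketch below encodes only the guaranteed minimums described above; RAID 10 can sometimes survive more failures if they happen to land in different mirrored pairs.

```python
# Guaranteed number of simultaneous drive failures each level tolerates
GUARANTEED_TOLERANCE = {
    "RAID 0": 0,   # striping only - any single failure loses the set
    "RAID 1": 1,   # mirroring - the surviving drive holds a full copy
    "RAID 5": 1,   # striping with parity - rebuild after one failure
    "RAID 10": 1,  # worst case; more failures survivable if they hit different pairs
}

def data_survives(level, failed_drives):
    """True only when survival is guaranteed no matter which drives failed."""
    return failed_drives <= GUARANTEED_TOLERANCE[level]

print(data_survives("RAID 5", 1))   # True - replace the failed drive and rebuild
print(data_survives("RAID 5", 2))   # False - fix the array and restore from backup
```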
Optical drive (CD, DVD, and Blu‐ray) problems are normally media‐related. Although optical disc technology is pretty reliable, it's not perfect. One factor to consider is the cleanliness of the disc. On many occasions, if a disc is unreadable, cleaning it with an approved cleaner and a lint‐free cleaning towel will fix the problem. The next step might be to use a commercially available scratch‐removal kit. If that fails, you always have the option to send the disc to a company that specializes in data recovery.
If the operating system doesn't see the drive, start troubleshooting by determining whether the drive is receiving power. If the tray will eject, you can assume the drive has power. Next, check BIOS/UEFI Setup (SATA or PATA drives) to make sure that the drive has been detected. If not, check the primary/secondary jumper on the drive, and make sure that the PATA adapter is set to Auto, CD‐ROM, or ATAPI in BIOS/UEFI Setup. Once inside the case, ensure that both the drive and motherboard ends of the cable are securely connected and, on a PATA drive, that the ribbon cable is properly aligned, with pin 1 (the edge with the red or pink stripe) closest to the power connector.
To play movies, a DVD or Blu‐ray drive must have Moving Picture Experts Group (MPEG) decoding capability. This is usually built into the drive, video card, or sound card, but it could require a software decoder. If DVD or Blu‐ray data discs will read but not play movies, suspect a problem with the MPEG decoding.
If an optical drive works normally but doesn't perform its special capability (for example, it won't burn discs), perhaps you need to install software to work with it. For example, with CD‐RW drives, unless you're using an operating system that supports CD writing (and nearly all OSs today do), you must install CD‐writing software in order to write to CDs.
Troubleshooting video problems is usually fairly straightforward because there are only a few components that could be causing the problem. You can sum up nearly all video problems with two simple statements:
In the majority of cases when you have a video problem on a desktop computer, a good troubleshooting step is to check the monitor by transferring it to another machine that you know is working. See if it works there. If the problem persists, you know it's the monitor. If it goes away, you know it's the video card (or possibly the driver). Is the video card seated properly? Is the newest driver installed?
The CompTIA A+ exam objectives list 11 symptoms you should understand and know how to fix. We'll break them into three categories:
Let's take a look at each of them now.
Imagine you're getting ready for a big presentation. Everyone is gathered in the room, and you connect your laptop to the video projector or external monitors—and there's no display. The audience sighs and people start getting fidgety or multitasking. It's not a great situation. Odds are that if this hasn't happened to you, then you've been on the opposite side—sitting there wishing the presenter could figure it out. What's the best way to resolve this type of issue? Try these three steps:
FIGURE 12.4 LCD cutoff switches (video toggle keys)
If everything checks out, it's possible there could be physical cabling issues. Try disconnecting and reconnecting the video cable, or swapping in another cable if possible.
This group of symptoms deals specifically with the image on the screen, or the lack thereof. Here are the ones you should know:
Fuzzy Image Resolving a fuzzy image problem will differ depending on the display device. For example, projectors have focus mechanisms that allow them to produce images on screens at different distances. A lot of projectors will try to autofocus but will also have onscreen menu options or a knob around the outside of the lens to manually adjust the focus.
A fuzzy image on an LCD screen is an entirely different story. It could be caused by external interference such as fluorescent lights, magnetic devices, and electrical devices such as fans, lamps, and speakers. Check for any of those nearby. If the display uses a cable, it could also be a loose or bad cable. Finally, it could be that the resolution is set for something that the display can't handle, or at least can't handle well. (To be fair, most of the time if the resolution isn't supported, the image will appear warped or stretched and distorted, but it could be fuzzy as well.)
There are a few things you can try to fix it. In Windows, right‐click an open area of the desktop and choose Display Settings (Figure 12.5). Once there, you can change the display resolution. Another thing to try is to click Advanced Scaling Settings (Figure 12.6). Turn on the toggle for Let Windows Try To Fix Apps So They're Not Blurry. Custom scaling features, configured on the same page, could cause fuzziness as well.
FIGURE 12.5 Windows Display Settings
FIGURE 12.6 Advanced Scaling Settings
Flashing Screen Sometimes a display will either subtly flicker or flash off and on. Those two symptoms are caused by different things. Flickering screens are most commonly caused by the backlight starting to fail. In those cases, replace the backlight.
Flashing off and on could be the backlight, but it could also be a loose cable or an unsupported resolution. Try the usual fixes, including checking the cables (if applicable), changing the resolution, or reinstalling the video card driver.
FIGURE 12.7 Laptop function keys
In this final section on video, projector, and display issues, we look at a few symptoms that don't fit nicely into our other sections. This is kind of the grab bag of random video and display problems.
Audio Issues Many display units today have built‐in speakers. The most common reason people have audio problems is because something is muted, but it could also be a cable or connection issue.
First, check the display unit to ensure it's not muted and that the volume is turned up to a reasonable level. This is done on the display's onscreen menu. Next, check to ensure that the computer's audio output is set to the correct device and that it's not muted or the volume isn't turned down. In Windows, right‐click the speaker icon on the taskbar, then choose Open Sound Settings (Figure 12.8). Choose the correct device using the drop‐down box in the Output section, and ensure that the master volume is turned up.
FIGURE 12.8 Windows sound settings
Intermittent Projector Shutdown We noted earlier that projectors create a lot of heat. When a projector overheats, it will shut itself off to avoid frying components or the bulb. This is the most likely cause of intermittent shutdowns. After the projector cools off, perform a little maintenance cleaning. Most projectors have an air filter to keep dust and debris out of it—check to ensure that's clean and replace it if necessary. Also check to make sure the cooling fan is operational and blowing out warm air.
Monitors can shut down intermittently as well due to overheating. It was more common on older CRT monitors than it is on LCD ones, but it can still happen. Be sure the air vents on the back of the display unit are clear from dust and debris. If the problem persists, it's best to replace the monitor.
Other graphics issues can be attributed to the memory installed on the video card. This is the storage location of the screens of information in a queue to be displayed by the monitor. Problems with the memory modules on the video card have a direct correlation to how well it works. It follows, then, that certain unacceptable video‐quality issues, such as jerky refresh speeds or lags, can be remedied by adding memory to a video card (if possible). Doing so generally results in an increase in both quality and performance. If you can't add memory to the video card, you can upgrade to a new one with more memory.
Mobile devices, for the most part, are essentially the same types of devices as desktops, but troubleshooting the two can feel very different. While the general troubleshooting philosophies never change—steps such as gathering information, isolating the problem, and then testing one fix at a time—the space and configuration limitations can make troubleshooting smaller devices more frustrating. For purposes of the discussion here, the term mobile devices includes laptops and anything smaller.
We will look at four areas where mobile devices could have different problems from their desktop counterparts: power and heat, input/output, connectivity, and damage. Much of what we cover will be more closely related to laptops than smaller mobile devices, but the concepts generally apply to mobile computers of all sizes. We'll call out specifics for small mobile devices where applicable.
Mobile devices are different from desktops in that they're designed to work without a continually plugged‐in power source. That freedom introduces complexities that can cause power‐related problems, though—specifically, the battery and charging the battery. In addition, because of their compact nature, mobile devices are more prone to overheating. In this section we'll look at battery‐related issues as well as overheating problems.
Mobile devices are of course meant to be mobile and not plugged in at all times. It's a bit ironic, then, that a good question to ask if a mobile device doesn't seem to power up is, “Is it plugged in?” Everyone hates getting asked that question, but it's a critical question to ask, even with mobile devices. If the device works when it's plugged in but not unplugged, you've narrowed down the problem. You can't assume that the battery is working (or is attached) as it's supposed to be. Always check power and connections first!
If the laptop works while it's plugged in but not while on battery power, the battery itself may be the culprit. As batteries get older, they are not able to hold as much of a charge and, in some cases, are not able to hold a charge at all. That is to say, the battery health may be poor. If the battery won't charge while the laptop is plugged in, try removing the battery and reinserting it. If it still won't charge, you might want to replace the battery.
Another issue that small devices can have is an extremely short battery life. We're not talking about when people complain that their laptop only runs for an hour and a half when they are playing a DVD while surfing the Internet and talking to their friends on their Bluetooth headset over a social media instant messenger. No, that's bound to drain your battery quickly. What we're referring to here is when a laptop battery only lasts for an hour or so after a full charge with normal usage, or if a mobile phone battery is only able to power the device for 30 minutes or so. These things happen.
If it's a laptop, you can try to perform a battery calibration, as we discussed in Chapter 9. For all mobile devices, you can try to drain the battery completely and then charge it fully before turning the device back on. If these options don't work, then it's likely that the battery needs to be replaced.
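On Windows laptops, the built-in powercfg /batteryreport command will generate a report that shows how the battery's full-charge capacity has degraded over time. For a quick scripted check of charge level and power source, the following sketch assumes the third-party psutil package is installed (pip install psutil); it is a convenience example, not a vendor diagnostic.

```python
import psutil  # third-party package: pip install psutil

def battery_status():
    """Report the charge level and whether the machine is running on AC power."""
    batt = psutil.sensors_battery()
    if batt is None:
        return "No battery detected - check that it is present and properly seated."
    source = "AC power" if batt.power_plugged else "battery"
    return f"Charge: {batt.percent:.0f}%, running on {source}"

print(battery_status())   # e.g., "Charge: 47%, running on battery"
```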
Many laptop power adapters have a light indicating that they're plugged in. If there's no light, check to make sure that the outlet is working, or switch outlets. Most laptops also have a power‐ready indicator light that illuminates when plugged into a wall outlet; check to see if it's lit. If the outlet is fine, try another power adapter. They do fail on occasion.
Smaller mobile devices will have a lightning bolt next to their battery icon or an animated filling battery when charging. If the device doesn't appear to charge, the same culprits apply: it could be the outlet, the adapter, or the device itself.
If you're working on a DC adapter, the same concepts apply. Check for lights, try another adapter if you have one, or try changing plugs, if possible. For example, if you're using a DC outlet in a car, many newer models have secondary power sources, such as ones in the console between the seats.
Another thing to remember when troubleshooting power problems is to remove all external peripherals. Strip your laptop down to the base computer so that there isn't a short or other power drain coming from an external device.
The last power issue that we need to discuss is a swollen battery. As the term suggests, the battery physically swells in size. It can be caused by a number of things, including manufacturer defects, age, misuse, using the wrong adapter for charging, or leaving the laptop constantly plugged into a wall outlet. Inside the battery, the individual cells become overcharged, causing them to swell. Sometimes the swelling is barely noticeable, but it can cause the device case to crack or pop apart. Other times it's pretty obvious, such as the one shown in Figure 12.9.
FIGURE 12.9 iPhone with a swollen battery
Mpt‐matthew at English Wikipedia [GFDL (www.gnu.org/copyleft/fdl.html
)
or CC BY‐SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0
)]
If you have a swollen battery, turn the device off immediately and make sure that it's not plugged into a charger. If the battery is removable, you can try to remove it, if you wish, but be very careful. Swollen batteries are more prone to explosions than normal batteries because the casing is already compromised. If you are able to remove it, place it into a safe container, just in case there are further issues. If the battery is not removable, it's time for a new device. In either case, take the battery or device to a proper recycling center to dispose of it. Never just throw it in the trash; it can explode and harm sanitation workers, as well as cause significant damage to the environment.
Smaller devices have greater potential to overheat than do their larger brethren. Space is at a premium, so all the components are packed tightly together, which means less room for each component to breathe. Manufacturers realize this, of course, so they use components that generate less heat. Overheating can still be a problem, though. If your mobile device is overheating, turn it off to let it cool down. It could be from overuse, or perhaps it did not have proper ventilation (for example, if it was stuffed into someone's pocket or purse). On laptops, check to ensure that the cooling fan is working and not full of dust or debris. If the overheating is persistent, you have a few options. The first is to test or replace the battery, as that's the most likely culprit. If overheating still happens, you may have to replace the device.
Laptop keyboards aren't as easy to switch out as desktop keyboards. You can, however, very easily attach an external USB keyboard to your laptop if the keys on your laptop don't appear to work.
If the keyboard doesn't seem to respond at all, try pressing the Num Lock and Caps Lock keys to see if they toggle the Num Lock and Caps Lock lights on and off. If the lights don't respond, the keyboard isn't functioning properly. Try rebooting the system. (You will probably have to press and hold the power button for 5 seconds, and the system will shut off. Wait 10 seconds, and press the power button again to turn it back on.) If that doesn't fix the problem, you probably have faulty hardware.
Another problem unique to laptop keyboards is the Fn key. (It can be your friend or your enemy.) You can identify it on your laptop keyboard because it's in the lower‐left corner and has the letters Fn on it (often in blue), as shown in Figure 12.10. If the Fn key is stuck on, the only keys that will work are those with functions on them. If you look at other keys on your laptop, several of them will have blue lettering too. Those are the functions that the keys may perform if you press and hold the Fn key before pressing the function key that you want. If the Fn key is stuck on, try toggling it just as you would a Caps Lock or Num Lock key.
FIGURE 12.10 The Fn key on a laptop
One of the conveniences that users often take advantage of in laptops is a built‐in pointing device. Most laptops have touchpads or point sticks that function much like a mouse. They're nice because you don't need to carry an external mouse around with you. While these types of devices are usually considered very handy, some people find them annoying. For example, when you are typing, your palm might rest on the touchpad, causing erratic pointer behavior. This is referred to as a ghost cursor because it seems like the cursor just randomly jumps all over the screen. You can turn the touchpad off through Control Panel. While you can turn it off on purpose, remember that it can be turned off accidentally as well. Check to make sure that it's enabled. Some laptops allow you to disable or change the sensitivity of the touchpad as well, just as you can adjust the sensitivity of your mouse.
Another potential issue is cursor drift, where the mouse cursor will slowly drift in one direction even though you are not trying to make it move. This issue is generally related to the point stick not centering properly after it's been used. If you have cursor drift, try using the point stick and moving it back and forth a few times to get it to re‐center itself. You can also try calibrating it within the operating system (most manufacturers make it a tab in Mouse properties), or rebooting. If the problem persists, either disable or replace the point stick.
Finally, here are two issues you may encounter with mobile device displays: digitizer issues and broken screens. Recall from Chapter 9 that a digitizer is a device that can be written or drawn on, typically with the touch from a human finger. Most mobile devices have a digitizer built into the display unit. It may be the glass of the display itself, or it might be implemented as an overlay for the display. Either way, if it's not functioning, that can cause problems. You can probably work around it on a laptop, but a smartphone or tablet with a nonworking digitizer is pretty useless.
In touch‐enabled Windows devices, digitizers can be calibrated under Control Panel ➢ Hardware And Sound ➢ Tablet PC Settings ➢ Calibrate The Screen For Pen Or Touch Input. Rebooting may also help. For iOS and Android tablets and phones, if the digitizer isn't working, the only troubleshooting step is to power it off and restart the device. With any device, if the digitizer isn't working, the next step is to replace the screen or the device.
A broken screen, while unfortunate, is all too common with mobile devices. Considering the beating they take on a regular basis, it's a little surprising it doesn't happen more often. First, to help avoid broken screens, make sure all of your mobile users have screen protectors. If a screen does get broken, either replace the screen or replace the device.
Nearly every mobile device sold is equipped with integrated wireless networking, and most have Bluetooth built in as well. In many cases, the wireless antenna is run into the LCD panel. This allows the antenna to stand up higher and pick up a better signal.
If your wireless networking isn't working on a laptop, do the following:
FIGURE 12.11 Network card toggle switch above the keyboard
Click the Configure button to open up more Properties, including driver management.
Some network cards have their own proprietary configuration software as well. You can also often check here by clicking a tab (often called Wireless Networks) to see if you're getting a signal and, if so, the strength of that signal.
FIGURE 12.12 Wireless network connection properties
Check the strength of the signal.
A weak signal is the most common cause of intermittent wireless networking connection problems. If you have intermittent connectivity and keep getting dropped, see if you can get closer to the wireless access point (WAP) or remove obstructions between you and the WAP. Failing network cards and connectivity devices can also cause intermittent wireless networking connection failures.
If the wireless connection fails but the system has a wired RJ‐45 port, try plugging it in. For this, you will need an Ethernet cable and, of course, a wired network to plug it into. But if you get lights on the NIC, you might get on the network.
The principles behind troubleshooting network or Bluetooth connectivity issues on mobile phones and tablets are the same as on laptops. The big difference is that you can't try an external network card if your internal card is failing. We originally looked at some of these settings in Chapter 10, “Mobile Connectivity and Application Support,” but now is a good time to review them. The first thing to check is that the network connection or Bluetooth is enabled, which also means double‐checking that airplane mode is not turned on. On Android and iOS devices, this is done through Settings. Figure 12.13 shows iOS network settings, and Figure 12.14 shows Android network settings. Toggle the connection off and then back on to reset it; often, that will resolve connectivity issues.
Another way to access network settings in iOS is from the Control Center. You can do this from both the lock screen and the home screen. Simply swipe your finger down from the very top of the iPhone's touchscreen, and you will get the Control Center, similar to what's shown in Figure 12.15. In Android, open the notifications area by swiping down from the top of the screen (you may need to swipe down twice), and network settings will be there as well, as shown in Figure 12.16.
FIGURE 12.13 iOS network settings
FIGURE 12.14 Android network settings
FIGURE 12.15 iPhone Control Center
FIGURE 12.16 Android notifications center
Mobile devices take much more of a beating than stationary devices do, which is why cases and screen protectors are needed accessories. Sometimes things happen, though, and a device gets physically damaged. Here we will look at two types of physical damage: liquid damage and physically damaged ports. Then we will take a look at malware and how to avoid issues it can cause.
A device can become damaged in any number of ways, with dropping being the most common. Even if you have a great case on your phone, an airborne expedition down a flight of concrete stairs probably isn't going to have a happy ending. Similarly, liquid can do nasty things to electronics as well.
If a laptop gets doused in a liquid, it's best to turn it off as soon as possible and let it dry out. If it was a spill on the keyboard and no liquid got into ports, the computer is probably salvageable. If liquid got inside the ports, they could be very difficult to clean out and the liquid can cause connectivity problems for those ports.
For significant liquid damage, the laptop can be taken apart and the components—even circuit boards such as the motherboard—can be cleaned with demineralized water and a lint‐free cloth. Disassemble the components, clean them with water and the cloth, and let them thoroughly dry. Reassemble, and see if it works. Several years ago, one of our friends had a toddler son who decided to relieve himself on the laptop. The demineralized water and a thorough drying did the trick—the machine still worked. We can't guarantee success in every situation, however!
Mobile devices are much more liquid‐friendly than laptops are. Many phones and tablets today are water‐resistant, if not entirely waterproof. Every smartphone that is considered water‐resistant will have an ingress protection (IP) rating, such as IP67 or IP68. The first digit, which ranges from 0 to 6, represents the device's ability to withstand solid foreign material such as dust. The second digit, which ranges from 0 to 8, shows its moisture resistance. Sometimes you will see a rating such as IPX6, which means the device has not been tested for dust resistance and has a moisture resistance rating of 6.
For a device to be considered waterproof, it needs to have a moisture resistance rating of 7 or 8. A 7 rating means the device is protected from damage from immersion in water with a depth of up to 1 meter (3.3 feet) for up to 30 minutes. An 8 rating is given to devices that can withstand greater depth and time of immersion, which must be specified by the manufacturer. For the best in protection, buy a device with an IP67 or IP68 rating. Devices with no IP code might or might not survive heavy rain, sprays from water sources, or an accidental dunking.
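The structure of an IP code is simple enough to decode programmatically. The following Python sketch just applies the rules above (two digits, X for untested, 7 or 8 on the second digit meaning waterproof); the example codes are arbitrary.

```python
import re

def decode_ip_rating(code):
    """Split an ingress protection code such as 'IP68' or 'IPX6' into its two digits."""
    match = re.fullmatch(r"IP([0-6X])([0-8X])", code.upper())
    if not match:
        raise ValueError(f"Not a recognizable IP code: {code}")
    solids, moisture = match.groups()
    solids_desc = ("not tested for dust resistance" if solids == "X"
                   else f"solid-particle protection level {solids}")
    if moisture == "X":
        moisture_desc = "not tested for moisture resistance"
    elif moisture in ("7", "8"):
        moisture_desc = f"moisture resistance level {moisture} (considered waterproof)"
    else:
        moisture_desc = f"moisture resistance level {moisture}"
    return f"{code.upper()}: {solids_desc}, {moisture_desc}"

print(decode_ip_rating("IP68"))   # waterproof per the 7/8 rule above
print(decode_ip_rating("IPX6"))   # dust untested, moisture level 6
```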
If you suspect a mobile device has suffered water damage, first, immediately turn it off. Remove the case and anything else that can be removed, such as the SIM card and possibly the battery. Dry everything you can with a lint‐free cloth. Then you have a few choices. One is to let it air dry for at least 48 hours. Or you can try the “rice trick.” That is, put the device in a sealed container, covered in uncooked rice, and let it sit for 48 hours. The rice should soak up all of the water in the device. Note that professional opinions are divided on the rice trick. Some experts say it works well, others say that residue in the rice can damage electronic components. But if the phone isn't working anyway, how much risk is it to try?
The A+ exam objectives list physically damaged ports as a mobile device symptom. Sometimes the ports are obviously damaged, and other times they simply fail to work. In either case, the only remedy is to replace the port, which usually means replacing several components, including the motherboard on a laptop, or replacing the entire mobile device.
Malware is malicious software designed to damage, disrupt, or gain unauthorized access to a computer system. Malware infections are one of the most common security risks that you will encounter. Let's look at malware on laptops as well as mobile devices.
Laptops running Windows or macOS have the same vulnerability to malware as their desktop cousins do. To help guard against malware, install antimalware software. These are the four main classes of applications to help protect your system against malware and hackers:
There are also suites available that combine multiple security functions; for example, the Norton Security suite includes antivirus, antimalware, and antispam features, along with identity protection tools, a software firewall, a backup tool, and a PC tune‐up tool. McAfee's LiveSafe is similar. In addition, there is some overlap between the types of threats each application guards against; for example, an antivirus program may also target some types of nonvirus malware.
Even if you have an antimalware application installed, it's not perfect. Occasionally a virus or other malware may get around it, especially a new threat (and especially if you haven't updated your definitions lately). When a system is infected with a virus, a worm, a Trojan horse, or other malicious software, you need to remove it immediately. Here are the five steps to take to remove malware:
Many people believe, incorrectly, that mobile devices running iOS or Android can't get infected with malware. Or, others believe that iOS is totally safe but Android is less so. The second statement is a little closer to the truth, but iOS is still vulnerable to malware. Let's take a look at the four most common ways mobile devices can contract malware:
A few good rules of practice to avoid malicious apps or malware on a mobile device are:
Antimalware software such as Norton and Avast can be purchased for iOS and Android as well.
Even though society is moving away from paper forms, printers are still very common peripherals. Printers are also the most complex peripheral as far as troubleshooting is concerned; the complexity arises from the mechanics of putting ink to paper. There are several different ways that this can be accomplished, but the end results are all pretty much the same.
Different types of printers work in different ways, so you would expect that laser printers might have different issues from impact printers. Because problems are often dependent on the type of printer you're using, we've chosen to break down this discussion by printer type. We'll start with a quick review of the technology and then get into specific issues. At the end, we'll look at the process of managing the print spooler, which is the same regardless of the printer type in use.
Impact printers are so named because they rely on making a physical impact in order to print. These are typically dot‐matrix or daisy wheel printers. The impact printer's print head will be activated and forced up against the ink ribbon, making a character or pattern on the paper. Impact printers are relatively simple devices; therefore, only a few problems usually arise. We will cover the most common problems and their solutions here.
Problems with print quality are easy to identify. When the printed page comes out of the printer, the characters may be too light or have dots missing from them. Table 12.3 details some of the most common impact print quality problems, their causes, and their solutions.
Characteristics | Cause | Solution |
---|---|---|
Consistently faded or light characters | Worn‐out printer ribbon | See if you can adjust the print head to be closer to the ribbon. If not (or if it doesn't help), replace the ribbon with a new, vendor‐recommended ribbon. |
Print lines that go from dark to light as the print head moves across the page | Printer ribbon‐advance gear slipping | Replace the ribbon‐advance gear or mechanism. |
A small, blank line running through a line of print (consistently) | Print head pin stuck inside the print head | Replace the print head. |
A small, blank line running through a line of print (intermittently) | A broken, loose, or shorting print head cable | Secure or replace the print head cable. |
A small, dark line running through a line of print | Print head pin stuck in the out position | Replace the print head. (Pushing the pin in may damage the print head.) |
Printer making a printing noise, but no print appears on the page | Worn, missing, or improperly installed ribbon cartridge | Replace the ribbon cartridge correctly. |
Printer printing garbage, such as garbled characters | Cable partially unhooked, wrong driver selected, or bad printer control board (PCB) | Hook up the cable correctly, select the correct driver, or replace the PCB (respectively). |
TABLE 12.3 Common impact print quality problems
Printer jams (aka “the printer crinkled my paper”) are very frustrating because they always seem to happen more than halfway through your 50‐page print job, requiring you to take time to remove the jam before the rest of your pages can print. A paper jam happens when something prevents the paper from advancing through the printer evenly. There are generally three causes of printer jams: an obstructed paper path, stripped drive gears, and using the wrong paper.
Obstructed paper paths are often difficult to find. Usually it means disassembling the printer to find the bit of crumpled‐up paper or other foreign substance that's blocking the paper path. A common obstruction is a piece of the “perf”—the perforated sides of tractor‐feed paper—that has torn off and gotten crumpled up and then lodged in the paper path. It may be necessary to remove the platen roller and feed mechanism to get at the obstruction.
Stripped drive gears cause the paper to feed improperly, causing it to crinkle and cause jams. Using the wrong paper, such as thick paper when the platen has been set for thin paper, can also cause jams. When loading new paper, always be sure that the platen is properly adjusted.
Impact printers are used for multipart (or multipage) forms. Those forms are typically three or more sheets of paper thick. If the multipage forms are not feeding properly, it could be that the printer is set to receive paper that is too thin or too thick. Check the platen and adjust accordingly.
Printers use stepper motors to move the print head back and forth as well as to advance the paper. The carriage motor is responsible for the back‐and‐forth motion while the main motor advances the paper. These motors get damaged when they are forced in any direction while the power is on. This includes moving the print head over to install a printer ribbon as well as moving the paper‐feed roller to align paper. These motors are very sensitive to stray voltages. If you are rotating one of these motors by hand, you are essentially turning it into a small generator and thus damaging it.
A damaged stepper motor is easy to detect. Damage to the stepper motor will cause it to lose precision and move farther with each step. If the main motor is damaged (which is more likely to happen), lines of print will be unevenly spaced. If the print head motor goes bad, characters will be scrunched together. If a stepper motor is damaged badly enough, it won't move at all in any direction; it may even make grinding or high‐pitched squealing noises. If any of these symptoms appear, it's time to replace one of these motors.
Stepper motors are usually expensive to replace—about half the cost of a new printer! Damage to them is easy to avoid; the biggest key is to not force them to move when the power is on.
An inkjet printer has many of the same types of parts as an impact printer. In this sense, it's almost as though the inkjet technology is simply an extension of the technology used in impact printers. The parts on an inkjet can be divided into the following four categories:
Perhaps the most obvious difference between inkjet and impact printers is that impact printers often use tractor‐feed paper, whereas inkjets use normal paper. The differences don't end there, though. Inkjet printers work by spraying ink (often in the form of a bubble) onto a page. The pattern of the bubbles forms images on the paper.
Inkjet printers are the most common type of printer found in homes because they are inexpensive and produce good‐quality images. For this reason, you need to understand the most common problems with these printers so that your company can service them effectively. Let's take a look at some of the most common problems with inkjet printers and their solutions.
The majority of inkjet printer problems are quality problems. Ninety‐nine percent of these can be traced to a faulty ink cartridge. With most inkjet printers, the ink cartridge contains the print head and the ink. The major problem with this assembly can be described by “If you don't use it, you lose it.” The ink will dry out in the small nozzles and block them if they are not used at least once every week or two.
An example of a quality problem is when you have thin, blank lines present in every line of text on the page. This is caused by a plugged hole in at least one of the small, pinhole ink nozzles in the print cartridge. Another common problem is faded printing. Replacing the ink cartridge generally solves these issues.
If an ink cartridge becomes damaged or develops a hole, it can put too much ink on the page and the letters will smear. Again, the solution is to replace the ink cartridge. (You should be aware, however, that a very small amount of smearing is normal if the pages are laid on top of each other immediately after printing.)
One final print quality problem that does not directly involve the ink cartridge occurs when the print quickly goes from dark to light and then prints nothing. As previously mentioned, ink cartridges dry out if not used. That's why the manufacturers include a small suction pump inside the printer that primes the ink cartridge before each print cycle. If this priming pump is broken or malfunctioning, this problem will manifest itself and the pump will need to be replaced.
After you install a new cartridge into many inkjet printers, the print heads in that cartridge must be aligned. Print head alignment is the process by which the print head is calibrated for use. A special utility that comes with the printer software is used to do this. Sometimes it's run from the printer itself and other times from the computer the printer is installed on. These utilities vary a bit in how they work. For example, one utility might have the printer print several vertical and horizontal lines with numbers next to them. The utility then displays a screen and asks you to choose the horizontal and vertical lines that are the most “in line.” Once you enter the numbers, the software understands whether the print head(s) are out of alignment, in which direction, and by how much. The software then makes slight modifications to the print driver software to tell it how much to offset when printing. Other calibration software will print a pattern and then ask you to put the newly printed page on the scanner portion of the printer. It will then scan the pattern to make sure that the heads are properly aligned. Occasionally, alignment must be done several times to get the images to align properly.
Sometimes when you print a color document, the colors might not be the same colors that you expected based on what you saw on the screen. This is called an incorrect chroma display. A few different issues could cause this problem. First, ink could be bleeding from adjacent areas of the picture, causing the color to be off. A leaking cartridge can cause this, as can using the wrong type of paper for your printer.
If you know that you're using the right paper, try cleaning the print cartridges using the software utility that should have been included with the printer software. Once you do that, print a test page to confirm that the colors are correct. On most color printers, the test page will print colors in a pattern from left to right that mirrors the way the ink cartridges are installed. That brings us to our second potential problem: the ink cartridges are installed in the wrong spot. (This is for printers with multiple color ink cartridges.) That should be easy to check. Obviously, if that's the problem, put the color cartridges where they're supposed to be.
Third, if the ink that comes out of the cartridge doesn't match the label on the cartridge, try the self‐cleaning utility. If that doesn't help, replace the cartridge. Finally, if one of the colors doesn't come out at all and self‐cleaning doesn't help, just replace the cartridge.
Somewhat related to color problems is speckling on printed pages. This is where the pages have random dots of ink or other material on them as they print. This is most often caused by stuff like paper dust or residue from envelopes, staples, or glue getting into the machinery. Cleaning the printer and blowing it out with compressed air should clear up any speckling. It's possible, but less likely, that speckles could be caused by a leaky ink cartridge.
Inkjet printers have pretty simple paper paths, so paper jams due to obstructions are less likely than they are on impact printers. They are still possible, however, so an obstruction shouldn't be overlooked as a possible cause of jamming.
Paper jams in inkjet printers are usually due to one of two things:
The pickup roller usually has one or two D‐shaped rollers mounted on a rotating shaft. When the shaft rotates, one edge of the D roller rubs against the paper, pushing it into the printer. When the roller gets worn, it gets smooth and doesn't exert enough friction against the paper to push it into the printer.
If the paper used in the printer is too smooth, it can cause the same problem. Pickup rollers use friction, and smooth paper doesn't offer much friction. If the paper is too rough, on the other hand, it acts like sandpaper on the rollers, wearing them smooth. Here's a rule of thumb for paper smoothness: paper slightly smoother than a new dollar bill will work fine.
You will normally see one of two paper‐feeding options on an inkjet printer. The first is that the paper is stored in a paper tray on the front of the printer. The second, which is more common on smaller and cheaper models, is for the paper to be fed in vertically from the back of the printer in a paper feeder. Both types may also have manual feed or envelope feed options.
Regardless of the feed style, the printer will have a paper‐feed mechanism, which picks up the paper and feeds it into the printer. Inside the paper‐feed mechanism are pickup rollers, which are small rubber rollers that rub up against the paper and feed it into the printer. They press up against small rubber or cork patches known as separation pads. These help to keep the rest of the paper in the tray so that only one sheet is picked up at a time. A pickup stepper motor turns the pickup rollers.
If your printer fails to pick up paper, it could indicate that the pickup rollers are too worn. If your printer is always picking up multiple sheets of paper, it could be a couple of things, such as problems with the separation pads or your paper being too “sticky,” damp, or rough. Some printers that use vertical paper feeders have a lever with which you can adjust the amount of tension between the pickup rollers and the separation pads. If your printer is consistently pulling multiple sheets of paper, you might want to try to increase the tension using this lever.
The final component is the paper‐feed sensor. This sensor is designed to tell the printer when it's out of paper, and it rarely fails. When it does, the printer will refuse to print because it thinks it is out of paper. Cleaning the sensor might help, but if not, you should replace the printer.
Inkjet printers use stepper motors, just like impact printers. On an inkjet, the print head carriage is the component containing the print head that moves back and forth. A carriage stepper motor and an attached belt (the carriage belt) are responsible for the movement. The print head carriage stays horizontally stable by resting on a metal stabilizer bar. Another stepper motor is responsible for advancing the paper.
Stepper motor problems on an inkjet printer will look similar to the ones on an impact printer. That is, if the main motor is damaged, lines of print will be unevenly spaced, and if the print head motor goes bad, characters will be scrunched together. If the damage is severe, the stepper motor may not move at all and may make grinding or high‐pitched squealing noises. If any of these symptoms appear, it's time to replace one of these motors. As with impact printers, stepper motors can be expensive, so it may make more economic sense to replace the printer.
Inkjet printers have internal power circuits that convert the electricity from the outlet into voltages that the printer can use—typically, 12V and 5V. The specific device that does this is called the transformer. If the transformer fails, the printer will not power up. If this happens, it's time to get a new printer.
The process that laser printers use to print, called the electrophotographic (EP) imaging process, is the most complex process of all commonly used printers. You should have already memorized the seven‐step EP process for the 220‐1101 A+ exam, but perhaps you've forgotten a bit. Table 12.4 provides a short description of what happens in each step.
Step | Action |
---|---|
Processing | The page to be printed gets rendered, one horizontal strip at a time. The image is stored in memory for printing. |
Charging | The charging corona gets a high voltage from the high‐voltage power supply (HVPS). It uses the voltage to apply a strong uniform negative charge (–600VDC) to the photosensitive drum. |
Exposing | The laser scans the drum. Wherever it touches the drum, the charge is reduced from –600VDC to around –100VDC. The pattern formed on the drum will be the image that is printed. |
Developing | The developing roller acquires a –600VDC charge from the HVPS and picks up toner, which gets the same –600VDC charge. As the developing roller rolls by the photosensitive drum, the toner is attracted to the lesser‐charged (–100VDC) areas on the photosensitive drum and sticks to it in those areas. |
Transferring | The transfer corona wire or roller acquires a strong positive charge (+600VDC) and transfers it to the paper. As the photosensitive drum with toner on it rolls by, the toner is attracted to the paper. |
Fusing | The fuser roller, heated to about 350°F, melts the toner, and the rubberized pressure roller presses the melted toner into the paper, making the image permanent. |
Cleaning | A rubber blade scrapes any remaining toner off the drum, and a fluorescent lamp discharges any remaining charge on the photosensitive drum. |
TABLE 12.4 The EP imaging process
Looking at the steps involved in laser printing, it's pretty easy to tell that laser printers are the most complex printers that we have discussed. The good news, though, is that most laser printer problems are easily identifiable and have specific fixes. Let's discuss the most common laser and page printer problems and their solutions.
If you turn on your laser printer and it doesn't respond normally, there could be a problem with the power it's receiving. Of course, the first thing to do is to ensure that it's plugged in.
A laser printer's DC power supply provides three different DC voltages to printer components. These can be checked at a power interface labeled J210, which is a 20‐pin female interface. Pin 1 will be in the lower‐left corner, and the pins along the bottom will all be odd numbers, increasing from left to right.
Using the multimeter, you should find the following voltages:
If none of the voltages are reading properly, then you probably need to replace the fuse in the DC power supply. If one or more (but not all) of the voltages aren't reading properly, then the first thing to do is to remove all optional hardware in the printer (including memory) and test again. If the readings are still bad, then you likely need to replace the DC power supply.
You can connect many laser printers directly to your network by using a network cable (such as Category 5, 5e, 6, or 6a) or by using a wireless network adapter with the printer. In cases like these, the printer acts as its own print server (typically, print server software is built into the printer), and it can speed up printing because you don't have a separate print server translating and then sending the directions to the printer.
For printers such as these, no connectivity can be a sign of improperly configured IP settings, such as the IP address. While each printer is somewhat different, you can manually configure most laser printers' IP settings a number of ways:
You can also configure most IP printers to obtain an IP address automatically from a Dynamic Host Configuration Protocol (DHCP) server. When the printer is powered up, it will contact the server to get its IP configuration information, just like any other client on the network. While this may be convenient, it's usually not a good idea to assign dynamic IP addresses to printers. Client computers will have their printer mapped to a specific IP address; if that address is changed, you will have a lot of people complaining about no connectivity. If you are using the DHCP server to manage all of your network's IP addresses, be sure to reserve a static address for the printers.
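If your network's DHCP service runs on Windows Server (many small offices use a SOHO router's web interface instead), one way to create such a reservation is from the command line. The following is only a minimal sketch, not a definitive procedure: the scope, IP address, MAC address, and names are hypothetical placeholders, and the exact netsh dhcp syntax can vary by server version, so check netsh dhcp server /? on your system first.
rem Reserve 192.168.1.50 in the 192.168.1.0 scope for the printer's MAC address
netsh dhcp server scope 192.168.1.0 add reservedip 192.168.1.50 001122aabbcc "LaserPrinter01" "Reserved for the accounting laser printer"
With the reservation in place, the printer can stay set to obtain its address automatically, yet it will always lease the same IP address that the client computers have mapped.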
You tell your computer to print, but nothing comes out of the printer. This problem is probably the most challenging to solve because several different things could cause it. Are you the only one affected by the problem, or are others having the same issue? Is the printer plugged in, powered on, and online? As with any troubleshooting, check your connections first.
Sometimes when nothing prints, you get a clue as to what the problem is. The printer may give you an “out of memory” error or something similar. Another possibility is that the printer will say “processing data” (or something similar) on its LCD display and nothing will print. It's likely that the printer has run out of memory while trying to process the print job. If your printer is exhibiting these symptoms, it's best to power the printer off and then power it back on.
Laser printers today run at copier speeds. Because of this, their most common problem is paper jams. Paper can get jammed in a printer for several reasons. First, feed jams happen when the paper‐feed rollers get worn (similar to feed jams in inkjet printers). The solution to this problem is easy: replace the worn rollers.
Another cause of feed jams is related to the drive gear of the pickup roller. The drive gear (or clutch) may be broken or have teeth missing. Again, the solution is to replace it. To determine if the problem is a broken gear or worn rollers, print a test page, but leave the paper tray out. Look into the paper‐feed opening with a flashlight and see if the paper pickup roller(s) are turning evenly and don't skip. If they turn evenly, the problem is probably worn rollers.
Worn exit rollers can also cause paper jams. These rollers guide the paper out of the printer into the paper‐receiving tray. If they are worn or damaged, the paper may catch on its way out of the printer. These types of jams are characterized by a paper jam that occurs just as the paper is getting to the exit rollers. If the paper jams, open the rear door and see where the paper is located. If the paper is very close to the exit rollers, they are probably the problem.
The solution is to replace all the exit rollers. You must replace them all at the same time because even one worn exit roller can cause the paper to jam. Besides, they're inexpensive. Don't skimp on these parts if you need to have them replaced.
Paper jams can also be the fault of the paper. If your printer consistently tries to feed multiple pages into the printer, the paper isn't dry enough. If you live in an area with high humidity, this could be a problem. We've heard some solutions that are pretty far out but work (like keeping the paper in a Tupperware‐type airtight container or microwaving it to remove moisture). The best all‐around solution, however, is humidity control and keeping the paper wrapped until it's needed. Keep the humidity around 50 percent or lower (but above 25 percent if you can, in order to avoid problems with electrostatic discharge).
Finally, a grounded metal strip called the static‐charge eliminator strip inside the printer drains the transfer corona charge away from the paper after it has been used to transfer toner from the EP cartridge. If that strip is missing, broken, or damaged, the charge will remain on the paper and may cause it to stick to the EP cartridge, causing a jam. If the paper jams after reaching the transfer corona assembly, this may be the cause.
It's really annoying to print a 10‐page contract and receive 10 pages of blank paper from the printer. Blank pages are a somewhat common occurrence in laser printers. Somehow, the toner isn't being put on the paper. There are three major causes of blank pages: the toner cartridge, the transfer corona assembly, and the high‐voltage power supply (HVPS).
Another issue that crops up rather often is the problem of using refilled or reconditioned toner cartridges. During their recycling process, these cartridges may be filled with the wrong kind of toner (for example, one with an incorrect composition). This can cause toner to be repelled from the EP drum instead of being attracted to it. Thus, there's no toner on the page because there was no toner on the EP drum to begin with. The solution once again is to replace the toner cartridge with the type recommended by the manufacturer.
A third problem related to toner cartridges happens when someone installs a new toner cartridge and forgets to remove the sealing tape that is present to keep the toner in the cartridge during shipping. The solution to this problem is as easy as it is obvious: remove the toner cartridge from the printer, remove the sealing tape, and reinstall the cartridge.
Transfer Corona Assembly The second cause of the blank‐page problem is a damaged or missing transfer corona wire or damaged transfer corona roller. If a wire is lost or damaged, the developed image won't transfer from the EP drum to the paper; thus, no image appears on the printout. To determine if this is causing your problem, do the first half of the self‐test (described later in this chapter in the “Self‐Tests” section). If there is an image on the drum but not on the paper, you know that the transfer corona assembly isn't doing its job.
To check if the transfer corona assembly is causing the problem, open the cover and examine the wire (or roller, if your printer uses one). The corona wire is hard to see, so you may need a flashlight. You will know if it's broken or missing just by looking at it. (It will either be in pieces or just not be there.) If it's not broken or missing, the problem may be related to the high‐voltage power supply.
The transfer corona wire (or roller) is a relatively inexpensive part. You can easily replace it by removing two screws and having some patience.
Only slightly more annoying than 10 blank pages are 10 black pages. This happens when the charging unit (the charging corona wire or charging corona roller) in the toner cartridge malfunctions and fails to place a charge on the EP drum. Because the drum is grounded, it has no charge. Anything with a charge (like toner) will stick to it. As the drum rotates, all of the toner is transferred to the page and a black page is formed.
This problem wastes quite a bit of toner, but it can be fixed easily. The solution (again) is to replace the toner cartridge with a known, good, manufacturer‐recommended one. If that doesn't solve the problem, then the HVPS is at fault. (It's not providing the high voltage that the charging corona needs to function.)
Repetitive marks occur frequently in heavily used (as well as older) laser printers. Toner spilled inside the printer may be causing the problem. It can also be caused by a crack or chip in the EP drum (this mainly happens with recycled cartridges), which can accumulate toner. In both cases, some of the toner gets stuck onto one of the rollers. Once this happens, every time the roller rotates and touches a piece of paper, it leaves toner smudges spaced a roller circumference apart.
The solution is relatively simple: clean or replace the offending roller. To help you figure out which roller is causing the problem, the service manuals contain a chart like the one shown in Figure 12.17. (Some larger printers also have the roller layout printed inside the service door.) To use the chart, place the printed page next to it. Align the first occurrence of the smudge with the top arrow. The next smudge will line up with one of the other arrows. The arrow it lines up with tells you which roller is causing the problem.
FIGURE 12.17 Laser printer roller circumference chart
Vertical white lines running down all or part of the page are a relatively common problem on older printers, especially ones that don't see much maintenance. Foreign matter (more than likely toner) caught on the transfer corona wire causes this. The dirty spots keep the toner from being transferred to the paper at those locations, so streaks form as the paper moves past the transfer corona wire.
The solution is to clean the corona wires. Many laser printers contain a small corona wire brush to help with this procedure. It's usually a small, green‐handled brush located near the transfer corona wire. To use it, remove the toner cartridge and run the brush in the charging corona groove on top of the toner cartridge. Replace the cartridge, and then use the brush to remove any foreign deposits on the transfer corona. Be sure to put it back in its holder when you're finished.
A groove or scratch in the EP drum can cause the problem of vertical black lines running down all or part of the page. Because a scratch is lower than the surface, it doesn't receive as much (if any) of a charge as the other areas. The result is that toner sticks to it as though it were discharged. The groove may go around the circumference of the drum, so the line may go all the way down the page.
Another possible cause of vertical black lines is a dirty charging corona wire. A dirty charging corona wire prevents a sufficient charge from being placed on the EP drum. Because the charge on the EP drum is almost zero, toner sticks to the areas that correspond to the dirty areas on the charging corona.
The solution to the first problem is, as always, to replace the toner cartridge (or EP drum, if your printer uses a separate EP drum and toner). You can also solve the second problem with a new toner cartridge, although that would be an extreme solution. It's easier to clean the charging corona with the brush supplied with the cartridge.
If you can pick up a sheet from a laser printer, run your thumb across it, and have the image come off on your thumb, then you have a fuser problem. The fuser isn't heating the toner and fusing it into the paper. This could be caused by a number of things—but all of them can be handled by a fuser replacement. For example, if the halogen light inside the heating roller has burned out, that would cause the problem. The solution is to replace the fuser. The fuser can be replaced with a rebuilt unit, if you prefer. Rebuilt fusers are almost as good as new ones, and some even come with guarantees. Plus, they cost less.
A similar problem occurs when small areas of smudging repeat themselves down the page. Dents or cold spots in the fuser heat roller cause this problem. The only solution is to replace either the fuser assembly or the heat roller.
Ghosting (or echo images) is what you have when you can see faint images of previously printed pages on the current page. This is caused by one of two things: a broken cleaning blade or bad erasure lamps. A broken cleaning blade causes old toner to build up on the EP drum and consequently present itself in the next printed image. If the erasure lamps are bad, then the previous electrostatic discharges aren't completely wiped away. When the EP drum rotates toward the developing roller, some toner sticks to the slightly discharged areas.
If the problem is caused by a broken cleaner blade, you can replace the toner cartridge. If it's caused by bad erasure lamps, you'll need to replace them. Because the toner cartridge is the least expensive cure, you should try that first. Usually, replacing the toner cartridge will solve the ghosting problem. If it doesn't, you will have to replace the erasure lamps.
This has happened to everyone at least once. You print a one‐page letter, but instead of the letter, 10 pages of what looks like garbage (or garbled characters) come out of the printer, or many more pages emerge with only one character per page. This problem comes from one of two different sources:
Formatter Board The other cause of several pages of garbage being printed is a bad formatter board. This circuit board turns the information the printer receives from the computer into commands for the various components in the printer. Usually, problems with the formatter board produce wavy lines of print or random patterns of dots on the page.
Replacing the formatter board in a laser printer is relatively easy. Usually, this board is installed under the printer and can be removed by loosening two screws and pulling it out. Typically, replacing the formatter board also replaces the printer interface, which is another possible source of garbage printouts.
Many laser printers today are multifunction devices that include copying and scanning. Some devices also come with finishers, which add finishing touches such as collating, stapling, or hole‐punching the output. If one of those finishing functions is not working properly, the issue lies with the finisher. More often than not, a simple cleaning takes care of the issue. For example, if documents are not being stapled, there could be a jam in the stapling mechanism that needs to be cleared. Or, of course, the printer may simply be out of staples.
Now that we've defined some of the possible sources of problems with laser printers, let's discuss a few of the testing procedures that you use with them. We'll discuss HP LaserJet laser printers because they are the most popular brand of laser printer, but the topics covered here apply to other brands of laser printers as well.
We'll look at two ways to troubleshoot laser printers: self‐tests and error codes (for laser printers with LCD displays).
You can perform three tests to narrow down which assembly is causing the problem: the engine self‐test, the engine half self‐test, and the secret self‐test. These tests, which the printer runs on its own when directed by the user, are internal diagnostics for printers, and they are included with most laser printers.
FIGURE 12.18 Print engine self‐test button location. The location will vary on different printers.
In addition to the self‐tests, you have another tool for troubleshooting HP laser printers. Error codes are a way for the LaserJet to tell the user (and a service technician) what's wrong. Table 12.5 details some of the most common codes displayed on an HP LaserJet.
Message | Description |
---|---|
00 READY | The printer is in standby mode and ready to print. |
02 WARM UP | The fuser is being warmed up before the 00 READY state. |
04 SELF TEST or 05 SELF TEST | A full self‐test has been initiated from the front panel. |
11 PAPER OUT | The paper tray sensor is reporting that there is no paper in the paper tray. The printer will not print as long as this error exists. |
13 PAPER JAM | A piece of paper is caught in the paper path. To fix this problem, open the cover and clear the jam (including all pieces of paper causing the jam). Close the cover to resume printing. The printer will not print as long as this error exists. |
14 NO EP CART | There is no EP cartridge (toner cartridge) installed in the printer. The printer will not print as long as this error exists. |
15 ENGINE TEST | An engine self‐test is in progress. |
16 TONER LOW | The toner cartridge is almost out of toner. Replacement will be necessary soon. |
50 SERVICE | A fuser error has occurred. This problem is most commonly caused by fuser lamp failure. Power off the printer, and replace the fuser to solve the problem. The printer will not print as long as this error exists. |
51 ERROR | There is a laser‐scanning assembly problem. Test and replace, if necessary. The printer will not print as long as this error exists. |
52 ERROR | The scanner motor in the laser‐scanning assembly is malfunctioning. Test and replace as per the service manual. The printer will not print as long as this error exists. |
55 ERROR | There is a communication problem between the formatter and the DC controller. Test and replace as per the service manual. The printer will not print as long as this error exists. |
TABLE 12.5 HP LaserJet error messages
Printer technicians usually use a set of troubleshooting steps to help them solve HP LaserJet printing problems. Let's detail each of them to bring our discussion of laser printer troubleshooting to a close:
Most people know how to send a job to the printer. Clicking File and then Print, or pressing Ctrl+P on your keyboard, generally does the trick. But once the job gets sent to the printer, what do you do if it doesn't print?
When you send a job to the printer, that print job ends up in a line with all other documents sent to that printer. A series of print jobs waiting to use the printer is called the print queue. In most cases, the printer will print jobs on a first‐come, first‐served basis. (There are exceptions if you've enabled printing priorities in Printer Properties.) Once you send the job to the printer in Windows, a small printer icon will appear in the notification area in the lower‐right corner of your desktop, near the clock. By double‐clicking it (or by right‐clicking it and selecting the printer name), you will end up looking at the jobs in the print queue, like the one shown in Figure 12.19.
FIGURE 12.19 Print jobs in the print queue in Windows
In Figure 12.19, you can see that the first document submitted (at the bottom of the list) has an error, which may explain why it hasn't printed. All the other documents in the queue are blocked until the job with the error is cleared. You can clear it one of two ways. Either right‐click the document and choose Cancel, or from the Document menu, shown in Figure 12.20, choose Cancel.
FIGURE 12.20 Printer Document menu in Windows
Note that from the menu shown in Figure 12.20, you can pause, resume, restart, and cancel print jobs as well as see properties of the selected print job. If you wanted to pause or cancel all jobs going to a printer, you would do that from the Printer menu, as shown in Figure 12.21.
FIGURE 12.21 Printer menu in Windows
Once you have cleared the print job causing the problem, the next job will move to the top of the queue. It should show its status as Printing, like the one shown in Figure 12.22. But what if it shows that it's printing but it still isn't working? (We're assuming that the printer is powered on, connected properly, and online.) It could be a problem with the print spooler.
FIGURE 12.22 Print job printing correctly
The print spooler is a service that formats print jobs in a language that the printer understands. Think of it as a holding area where the print jobs are prepared for the printer. In Windows, the spooler is started automatically when Windows loads.
If jobs aren't printing and there's no apparent reason why, it could be that the print spooler has stalled. To fix the problem, you need to stop and restart the print spooler. Exercise 12.2 walks you through stopping and restarting the spooler in Windows 10.
If you have a different version of Windows, the steps to stop and restart the spooler are the same as in Exercise 12.2; the only difference might be in how you get to Computer Management.
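If you would rather work from the command line than from Computer Management, you can stop and restart the spooler service directly. This is a minimal sketch run from an elevated command prompt; it assumes you have administrative rights and uses the standard Windows service name Spooler.
rem Stop the print spooler service (run from an elevated command prompt)
net stop spooler
rem Start it again so the queued jobs can be processed
net start spooler
rem Verify that the service is running again
sc query spooler
Stopping and restarting the service clears the stalled state; any jobs still in the queue will attempt to print again once the spooler is back up.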
If your printer isn't spitting out print jobs, it may be a good idea to print a test page to see if that works. The test page information is stored in the printer's memory, so there's no formatting or translating of jobs required. It's simply a test to make sure that your printer hears your computer.
When you install a printer, one of the last questions you're asked is whether to print a test page. If there's any question, go ahead and do it. If the printer is already installed, you can print a test page from the printer's Properties window (right‐click the printer and choose Printer Properties). Just click the Print Test Page button, and it should work. If nothing happens, double‐check your connections and stop and restart the print spooler. If garbage prints, there is likely a problem with the printer or the print driver.
Printers can print to paper of various sizes, as well as in multiple orientations. If a print job is not coming out correctly, such as when the output is squeezed into a smaller area than a regular sheet of 8.5 × 11 paper, the paper size may be set improperly. Or, if the right side or bottom is cut off, it could be a page orientation issue. Both options are configured in the Printing preferences settings for the printer. To change the settings in Windows, open the Printers & Scanners app. Highlight the printer, choose Manage, and then select Printing Preferences (Figure 12.24).
In Figure 12.24, there is an option for paper sizes. (Note that this tab will look different for different printer models.) Letter paper is 8.5 × 11 inches, which is common in the United States. In most of Europe, the standard is called A4, which is 210 × 297 mm, or about 8.3 × 11.7 inches. Setting a printer to print to A4 when it should be Letter, or vice versa, can result in stretched or scrunched printing, or unusually larger than normal margins, depending on the printer. This tab also has settings for paper source. Some printers have multiple paper trays. They can be configured such that one has letter size paper and another has legal (8.5 × 14 inches) size paper. When the user prints, they need to specify letter or legal, and the printer will pull paper from the correct tray.
FIGURE 12.24 Paper/Quality options for a printer
On this specific printer, the paper orientation is handled on the Finishing tab, shown in Figure 12.25. The two options are portrait (taller than it is wide) or landscape (wider than it is tall). Note as well that this printer has options for printing on both sides, a booklet layout, and number of pages per sheet. Again, different printers will have different options—you may just need to click on a few different tabs to find the setting you're looking for, if it's available on that printer.
FIGURE 12.25 Finishing tab, including orientation
As a technician, you are going to be called on to solve a variety of issues, including hardware, software, and networking problems. Networking problems can sometimes be the most tricky to solve, considering that it could be either a software or a hardware problem or a combination of the two causing your connectivity issue.
The first adage for troubleshooting any hardware problem is to check your connections. That holds true for networking as well, but then your troubleshooting will need to go far deeper than that in a hurry. As with troubleshooting anything else, follow a logical procedure and be sure to document your work.
Nearly all the issues tested by CompTIA have something to do with connectivity, which makes sense because that's what networking is all about. Connectivity issues, when not caused by hardware, are generally the result of a messed‐up configuration. And because the most common protocol in use today, TCP/IP, has a lot of configuration options, you can imagine how easy it is to configure something incorrectly.
In the following sections, we'll look at connectivity issues and how to resolve them. But first, we will introduce you to several hardware tools and software commands you should know about, as they are essential to network troubleshooting.
Networks are so specialized that they have their own set of troubleshooting tools. This includes hardware and software tools. Knowing how to use them can make the difference between a quick fix and a lengthy and frustrating troubleshooting session. The CompTIA A+ exams will test you on both hardware and software tools. Hardware tools are covered in exam 220‐1101, whereas the software commands are part of exam 220‐1102 objective 1.2. Because both are critical to resolving network issues, we will cover them all here. We'll then review the software tools again in Chapter 15.
We covered several different types of cables and their properties in Chapter 5, “Networking Fundamentals.” Here, we will look at some tools that can be used to make or test network cables, as well as a few tools to troubleshoot network connectivity issues.
FIGURE 12.26 A basic multimeter
FIGURE 12.27 A UTP crimper
FIGURE 12.28 An RF Explorer handheld Wi‐Fi analyzer
FIGURE 12.29 A toner probe
FIGURE 12.30 A punch‐down tool
FIGURE 12.31 A TRENDnet cable tester
FIGURE 12.32 An Ethernet loopback plug
FIGURE 12.33 A Dualcomm network tap
Troubleshooting networks often involves using a combination of hardware tools and software commands. Usually, the software commands are easier to deal with because you don't need to dig around physically in a mess of wires to figure out what's going on. The downside to the software commands is that there can be a number of options that you need to memorize. In the following sections, we'll cover the following networking command‐line tools, which are all helpful utilities: ipconfig, ping, hostname, tracert, netstat, net, and nslookup.
With Windows‐based operating systems, you can determine the network settings on the client's network interface cards, as well as any that a DHCP server has leased to your computer, by typing the following at a command prompt: ipconfig /all.
ipconfig /all also gives you full details on the duration of your current lease. You can verify whether a DHCP client has connectivity to a DHCP server by releasing the client's IP address and then attempting to lease an IP address. You can conduct this test by typing the following sequence of commands from the DHCP client at a command prompt:
ipconfig /release
ipconfig /renew
ipconfig is one of the first tools to use when experiencing problems accessing resources because it will show you whether an address has been issued to the machine. If the address displayed falls within the 169.254.x.x range, the client was unable to reach the DHCP server and has defaulted to Automatic Private IP Addressing (APIPA), which will prevent the network card from communicating outside its subnet, if not altogether. Table 12.6 lists useful switches for ipconfig.
Switch | Purpose |
---|---|
/all | Shows full configuration information |
/release | Releases the IP address if you are getting addresses from a Dynamic Host Configuration Protocol (DHCP) server |
/release6 | Releases the IPv6 addresses |
/renew | Obtains a new IP address from a DHCP server |
/renew6 | Obtains a new IPv6 address from a DHCP server |
/flushdns | Flushes the local Domain Name System (DNS) name resolver cache |
TABLE 12.6 ipconfig switches
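If you just want a quick check for an APIPA address without reading through the full output, you can filter ipconfig with findstr. This is a simple sketch rather than an official diagnostic; if the second command prints anything, the adapter has fallen back to APIPA.
rem Show only the IPv4 address lines from the full output
ipconfig | findstr /i "IPv4"
rem Look specifically for an APIPA (169.254.x.x) address
ipconfig | findstr /c:"169.254"
If the second command returns nothing, the machine received its address from some other source, such as a DHCP server or a static configuration.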
Figure 12.34 shows the output from ipconfig, and Figure 12.35 shows the output from ipconfig /all for one network adapter.
FIGURE 12.34 Output from ipconfig
FIGURE 12.35 Output from ipconfig /all
In Exercise 12.3, you will renew an IP address on a Windows 10 system within the graphical interface.
While Windows provides this interface to troubleshoot connection problems, some administrators still prefer the reliability of a command‐line interface. Exercise 12.4 shows you how to perform a similar action using the command line.
The ping command is one of the most useful commands in the TCP/IP protocol suite. It sends a series of packets to another system, which in turn sends back a response. This utility can be extremely useful for troubleshooting problems with remote hosts. Pings are also called ICMP echo requests/replies because they use the Internet Control Message Protocol (ICMP).
The ping command indicates whether the host can be reached and how long it took for the host to send a return packet. Across WAN links, the time value will be much larger than across healthy LAN links.
The syntax for ping is ping hostname or ping IP address. Figure 12.37 shows what a ping should look like.
FIGURE 12.37 A successful ping
As you can see, by pinging with the hostname, we found the host's IP address thanks to DNS. The time is how long in milliseconds it took to receive the response. On a LAN, you want this to be 10 milliseconds (ms) or less, but 60ms to 65ms for an Internet ping isn't too bad.
The ping command has several options, which you can see by typing ping /? at the command prompt. Table 12.7 lists some useful options.
Option | Function |
---|---|
‐t | Persistent ping. Will ping the remote host until stopped by the client (by using Ctrl+C). |
‐n count | Specifies the number of echo requests to send. |
‐l size | Specifies the packet size to send. |
ping ‐4 / ping ‐6 | Uses either the IPv4 or IPv6 network, respectively. |
TABLE 12.7 ping options
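As a brief sketch of how those options can be combined, the commands below use placeholder hosts; substitute addresses and names that actually exist on your network.
rem Send 10 echo requests with a 1,400-byte payload to the default gateway (address is an example)
ping -n 10 -l 1400 192.168.1.1
rem Ping continuously over IPv4 until you press Ctrl+C
ping -4 -t www.example.com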
The hostname command is a very simple one. It returns the name of the host computer on which it's executed. Figure 12.38 shows the output.
FIGURE 12.38 hostname output
The netstat command is used to check out the inbound and outbound TCP/IP connections on your machine. It can also be used to view packet statistics, such as how many packets have been sent and received and the number of errors.
When used without any options, the netstat command produces output similar to that shown in Figure 12.39, which shows all the outbound TCP/IP connections.
FIGURE 12.39 Output from netstat
There are several useful command‐line options for netstat, as shown in Table 12.8.
Option | Function |
---|---|
‐a | Displays all connections and listening ports. |
‐b | Displays the executable involved in creating each connection or listening port. In some cases, well‐known executables host multiple independent components, and in these cases the sequence of components involved in creating the connection or listening port is displayed. In this case, the executable name is in brackets at the bottom, and at the top are the components it called, in sequence, until TCP/IP was reached. Note that this option can be time consuming, and it will fail unless you have sufficient permissions. |
‐e | Displays Ethernet statistics. This may be combined with the ‐s option. |
‐f | Displays fully qualified domain names (FQDNs) for foreign addresses. |
‐n | Displays addresses and port numbers in numerical form. |
‐o | Displays the owning process ID associated with each connection. |
‐p proto | Shows connections for the protocol specified by proto; proto may be any of the following: TCP, UDP, TCPv6, or UDPv6. If netstat is used with the ‐s option to display per‐protocol statistics, proto may be IP, IPv6, ICMP, ICMPv6, TCP, TCPv6, UDP, or UDPv6. |
‐r | Displays the routing table. |
‐s | Displays per‐protocol statistics. By default, statistics are shown for IP, IPv6, ICMP, ICMPv6, TCP, TCPv6, UDP, and UDPv6; the ‐p option may be used to specify a subset of the default. |
TABLE 12.8 netstat options
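As an example of how a technician might string these options together, the following sketch lists connections numerically along with their owning process IDs and then identifies one of those processes. The port number and PID shown are placeholders taken from a hypothetical output.
rem Show all connections and listening ports with numeric addresses and owning PIDs
netstat -ano
rem Narrow the output to a single port of interest (443 is just an example)
netstat -ano | findstr :443
rem Identify the process that owns a PID found in the previous output
tasklist /fi "PID eq 1234"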
One of the key things that must take place to use TCP/IP effectively is that a hostname must resolve to an IP address—an action usually performed by a DNS server.
nslookup is a command that enables you to verify entries on a DNS server. You can use the nslookup command in two modes: interactive and noninteractive. In interactive mode, you start a session with the DNS server in which you can make several requests. In noninteractive mode, you specify a command that makes a single query of the DNS server. If you want to make another query, you must type another noninteractive command.
To start nslookup in interactive mode (which is what most admins use because it allows them to make multiple requests without typing nslookup several times), type nslookup at the command prompt and press Enter. You will receive a greater‐than prompt (>), and you can then type the command that you want to run. You can also type help or ? to bring up the list of possible commands, as shown in Figure 12.40. To exit nslookup and return to a command prompt, type exit and press Enter.
To run nslookup in noninteractive mode, you include the query (and any options) with the nslookup command at the command prompt. For example, nslookup example.com performs a single lookup of that hostname, and nslookup -type=mx example.com queries the domain's mail exchanger records.
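For instance, a couple of noninteractive queries might look like the following sketch; example.com and the DNS server address are placeholders for your own domain and server.
rem Resolve a hostname using the default DNS server
nslookup example.com
rem Query MX (mail exchanger) records for a domain against a specific DNS server
nslookup -type=mx example.com 8.8.8.8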
Depending on the version of Windows you are using, net can be one of the most powerful commands at your disposal. All Windows versions include a net command, but its capabilities differ based on whether it is used on a server or a workstation and the version of the operating system.
FIGURE 12.40 Starting nslookup and using help
While always command line–based, net allows you to do almost anything that you want with the operating system. Table 12.9 shows common net switches.
Switch | Purpose |
---|---|
net accounts | To set account options (password age, length, and so on) |
net computer | To add and delete computer accounts |
net config | To see network‐related configuration |
net continue, net pause, net start, net statistics, and net stop | To control services |
net file | To close open files |
net group and net localgroup | To create, delete, and change groups |
net help | To see general help |
net helpmsg | To see specific message help |
net name | To see the name of the current machine and user |
net print | To interact with print queues and print jobs |
net send | To send a message to user(s) |
net session | To see session statistics |
net share | To create a share |
net time | To set the time to that of another computer |
net use | To connect to a share |
net user | To add, delete, and see information about a user |
net view | To see available resources |
TABLE 12.9 net switches
These commands are invaluable troubleshooting aids when you cannot get the graphical interface to display properly. You can also use them when interacting with hidden ($) and administrative shares that do not appear within the graphical interface.
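For example, the administrative share C$ doesn't appear when you browse the network, but you can still map it from the command line. This is only a sketch: the server name and drive letter are hypothetical, and you need credentials with administrative rights on the remote machine.
rem Map the hidden administrative share on a hypothetical server to drive X:
net use X: \\SERVER01\C$
rem Disconnect the mapped drive when you're finished
net use X: /delete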
The net command used with the share parameter enables you to create shares from the command prompt, using this syntax:
net share <share_name>=<drive_letter>:<path>
To share the C:\EVAN directory as SALES, you would use the following command:
net share sales=c:\evan
You can use other parameters with net share to set other options. Table 12.10 summarizes the most commonly used parameters, and Exercise 12.5 will give you some experience with the net share command.
Parameter | Purpose |
---|---|
/delete | To stop sharing a folder |
/remark | To add a comment for browsers |
/unlimited | To set the user limit to Maximum Allowed |
/users | To set a specific user limit |
TABLE 12.10 net share parameters
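Putting the syntax and parameters together, a share could be created and later removed as in the following sketch. The folder, comment, and user limit are arbitrary examples.
rem Share C:\EVAN as SALES with a comment and a limit of 10 simultaneous users
net share sales=C:\EVAN /remark:"Sales documents" /users:10
rem Stop sharing the folder when it is no longer needed
net share sales /delete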
The net /? command is basically a catch‐all help request. It will instruct you to use the net command in which you are interested for more information.
tracert (trace route) is a Windows‐based command‐line utility that enables you to verify the route to a remote host. Execute the command tracert hostname, where hostname is the computer name or IP address of the computer whose route you want to trace. tracert returns the different IP addresses the packet was routed through to reach the final destination. The results also include the number of hops needed to reach the destination. If you execute the tracert command without any options, you see a help file that describes all the tracert switches.
This utility determines the intermediary steps involved in communicating with another IP host. It provides a road map of all the routing an IP packet takes to get from host A to host B.
Timing information from tracert can be useful for detecting a malfunctioning or overloaded router. Figure 12.43 shows the output from tracert. In addition to tracert, there are many graphical third‐party network‐tracing utilities available on the market.
FIGURE 12.43 Output from tracert
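Two common ways to run it are shown in this sketch; the destination is a placeholder, and the -d switch simply skips reverse DNS lookups so the trace completes faster.
rem Trace the route to a host, resolving router names along the way
tracert www.example.com
rem Run the same trace without name resolution for quicker results
tracert -d www.example.com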
pathping (path ping) combines the best of both worlds from ping and tracert and is a favorite command for many net admins. It acts much like a ping, but it also traces the route to the destination and shows where, if anywhere, packet loss occurs between the sending computer and the remote host. Said differently, pathping first traces the route to the destination host, and then it pings each node between itself and the destination. Figure 12.44 shows the output. A similar command for Linux‐based computers is mtr.
FIGURE 12.44 pathping output
The top half of the output in Figure 12.44 looks just like tracert. The bottom half is where you see the ping times and packet loss. In this example, there's a little packet loss at each hop from hops 3 through 8. The loss here is low and not a concern. Table 12.11 lists frequently used switches.
Switch | Purpose |
---|---|
‐h number | Defines the maximum number of hops to search. Useful if, for example, you just want to test connectivity to the ISP. |
‐n | Does not resolve each hostname. This speeds up the results. |
‐p number | Number of milliseconds to wait between pings. The default is 250; a good choice is 100. Again, this speeds up results. |
‐q number | Number of queries per hop. The default is 100; choosing fewer speeds it up. Around 10–20 is usually enough. |
‐w number | Number of milliseconds to wait for each reply. The default is 3 seconds; 500 milliseconds is fine. Speeds up results. |
TABLE 12.11 pathping switches
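Combining the switches from Table 12.11, a quicker-than-default pathping run might look like the following sketch; the destination host is a placeholder.
rem Limit the trace to 15 hops, skip name resolution, send 10 queries per hop,
rem and wait only 100 ms between pings to speed up the results
pathping -h 15 -n -q 10 -p 100 www.example.com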
The whole purpose of using a network is to connect to other resources, right? So when networks don't work like they're supposed to, users tend to get a bit upset. The ubiquity of wireless networking has only made our jobs as technicians more complicated. In the following sections, we'll look at a variety of issues that you might run across and how to deal with them.
Let's start with the most dire situation: no connectivity. Taking a step back to look at the big picture, think about all the components that go into networking. On the client side, you need a network card and drivers, operating system, protocol, and the right configuration. Then you have a cable of some sort or a wireless connection. At the other end is a switch or wireless router. That device connects to other devices, and so forth. The point is, if someone is complaining of no connectivity, there could be one of several different things causing it. So start with the basics.
The most common issue that prevents network connectivity on a wired network is a bad or unplugged patch cable. Cleaning crews and the rollers on the bottoms of chairs are the most common threats to patch cables. In most cases, wall jacks are placed 4–10 feet away from the desktop. The patch cables are often lying exposed under the user's desk, and from time to time damage is done to the cable, or it's inadvertently snagged and unplugged. Tightly cinching the cable while tying it up out of the way is no better a solution. Slack must be left in the cable to allow for some amount of equipment movement and to avoid altering the electrical characteristics of the cable.
When you troubleshoot connectivity, start with the most rudimentary explanations first. Make sure that the patch cable is tightly plugged in, and then look at the card and check if any lights are on. If there are lights on, use the NIC's documentation to help troubleshoot. More often than not, shutting down the machine, unplugging the patch and power cables for a moment, and then reattaching them and rebooting the PC will fix an unresponsive NIC.
If you don't have any lights, you don't have a connection. It could be that the cable is bad or that it's not plugged in on the other side, or it could also be a problem with the NIC or the connectivity device on the other side. Is this the only computer having problems? If everyone else in the same area is having the same problem, that points to a central issue.
Most wireless network cards also have indicators on them that can help you troubleshoot. For example, a wireless card might have a connection light and an activity light, much like a wired network card. On one particular card we've used, the lights will alternate blinking if the card isn't attached to a network. Once it attaches, the connection light will be solid, and the activity light will blink when the card is busy. Other cards may operate in a slightly different manner, so be sure to consult your documentation.
If you don't have any lights, try reseating the wireless NIC. If you're using a USB wireless adapter, this is pretty easy. If it's inside your desktop, it will require a little surgery. If it's integrated into your laptop, you could have serious issues. Try rebooting first. If that doesn't help, see if you can use an expansion NIC and make that one light up.
Let's assume that you have lights and that no one else is having a problem. (Yes, it's just you.) This means that the network hardware is probably okay, so it's time to check the configuration. Open a command prompt, type ipconfig, and press Enter. You should get an IP address. (If it starts with 169.254.x.x, that's an APIPA address. We'll talk about those in the “Limited or Local Connectivity” section.) If you don't have a valid IP address, that's the problem.
If you do have a valid IP address, it's time to see how far your connectivity reaches. With your command prompt open, use the ping command to ping a known, remote working host. If that doesn't work, start working backward. Can you ping the outside port of your router? The inside port? A local host? (Some technicians recommend pinging your loopback address first with ping 127.0.0.1 [or ping ::1 on an IPv6 network] and then working your way out to see where the connectivity ends. Either way is fine. The advantage to starting with the loopback is that if it doesn't work, you know nothing else will either.) Using this methodology, you'll be able to figure out where your connectivity truly begins and ends.
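As a sketch of that inside-out approach, the sequence below starts at the loopback and works outward. The gateway and host addresses are placeholders; use whatever your ipconfig output and your ISP actually give you.
rem 1. Test the local TCP/IP stack
ping 127.0.0.1
rem 2. Test the address assigned to your own NIC (replace with your address)
ping 192.168.1.100
rem 3. Test the default gateway (the inside port of your router)
ping 192.168.1.1
rem 4. Test a known remote host on the Internet
ping www.example.com
Wherever the replies stop is where your troubleshooting should focus next.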
In a way, limited connectivity problems are a bit of a blessing. You can immediately rule out client‐side hardware issues because they can connect to some resources. You just need to figure out why they can't connect to others. This is most likely caused by one of two things: a configuration issue or a connectivity device (such as a router) problem.
Check the local configuration first. Use ipconfig /all to ensure that the computer's IP address, subnet mask, and default gateway are all configured properly. After that, use the ping utility to see the range of connectivity. In situations like this, it's also good to check with other users in the area. Are they having the same connectivity issues? If so, it's more likely to be a central problem rather than one with the client computer.
As we talked about in Chapter 6, “Introduction to TCP/IP,” Automatic Private IP Addressing (APIPA) is a service that autoconfigures your network card with an IP address. APIPA kicks in only if your computer is set to receive an IP address from the Dynamic Host Configuration Protocol (DHCP) server and that server doesn't respond. You can always tell an APIPA address because it will be in the format of 169.254.x.x.
When you have an APIPA address, you will be able to communicate with other computers that also have an APIPA address but not with any other resources. The solution is to figure out why you're not getting an answer from the DHCP server and fix that problem.
Link local addresses are the IPv6 version of APIPA, and link local addresses always start with fe80:: (they are in the fe80::/10 range). They will work to communicate with computers on a local network, but they will not work through a router. If the only IP address that your computer has is a link local address, you're not going to communicate outside of your network. The resolution is the same as it is for APIPA.
Every host on a network needs to have a unique IP address. If two or more hosts have the same address, communication problems will occur. The good news is that nearly every operating system today will warn you if it detects an IP address conflict with your computer. The bad news is it won't fix it by itself.
The communication problems will vary. In some cases, the computer will seem nearly fine, with intermittent connectivity issues. In others, it will appear as if you have no connectivity.
The most common cause of this is if someone configures a computer with a static IP address that's part of the DHCP server's range. The DHCP server, not knowing that the address has been statically assigned somewhere, doles out the address and now there's a conflict. Rebooting the computer won't help, nor will releasing the address and getting a new lease from the DHCP server—it's just going to hand out the same address again because it doesn't know that there's a problem.
As the administrator, you need to track down the offending user. A common way to do this is to use a packet sniffer to look at network traffic and determine the computer name or MAC address associated with the IP address in question. Most administrators don't keep network maps of MAC addresses, but everyone should have a network map with hostnames. If not, it could be a long, tedious process to check everyone's computer to find the culprit.
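If you don't have a packet sniffer handy, a rough first step is to use the ARP cache to learn which MAC address is currently answering for the disputed IP address. This is only a quick sketch, it works from a machine on the same subnet, and the address shown is a placeholder.
rem Ping the address in question so the local ARP cache gets populated
ping 192.168.1.50
rem Display the cached IP-to-MAC mappings and note the MAC tied to that address
arp -a
You can then compare that MAC address against your inventory or the DHCP server's lease list to find the offending machine.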
Intermittent connectivity is when the network sometimes connects, but it's not consistently connected. Sometimes the connection will quickly disappear and reappear, and other times it will be disconnected for longer—minutes can seem like hours when this happens. This category of problems is the most difficult and frustrating to troubleshoot. Getting to why it sometimes behaves but other times does not isn't always easy. Under this heading, we're going to consider intermittent connectivity, slow network speeds, high latency, and related issues, because they are all pretty similar. Terms like “slow network speeds” are fairly straightforward, but not all terms in this section are. Let's define a few, and then get into more detail, because most of these problems have the same causes and resolutions:
On a wired network, if you run into slow speeds or intermittent connectivity, it's likely a load issue. There's too much traffic for the network to handle, and the network is bogging down. Solutions include adding a switch, replacing hubs with switches (if your network is using hubs in today's age, it's really time to upgrade), and even creating virtual LANs (VLANs) with switches. If the network infrastructure is old (for example, if it's running on Category 3 cable or you only have 10 Mbps switches), then it's definitely time for an upgrade. Remember that your network can only be as fast as its slowest component.
Other wired issues can include bad or poorly connected cables or faulty switch ports. Check to ensure that the cable is properly connected on both ends. Most cables today have a latch that holds them into place, but it's still a good idea to detach and reattach them for good measure. You can always try a different cable, as well, to see if that resolves the problem. If the cable seems to be fine, then perhaps try a different port on the switch or hub, if possible. Finally—and this is rare in today's networking—there could be a speed or duplex mismatch between the sending and receiving devices. Network adapters and connectivity devices can be set to limit the speeds, as a form of backward compatibility. If one device is set too slow, then another transmitting at full speed may have issues connecting with it. With duplex, most NICs are fully capable of sending and receiving at the same time. That's called full duplex. Some older cards were able to operate at only half duplex, meaning they could send or receive, but not both at the same time. Today, nearly all devices autodetect speed and duplex, so this is, as we said, a rare issue. If all else fails, though, it never hurts to check the configuration settings.
Again, the previous steps will fix most issues, but there are a few more things to know about VoIP and port flapping:
FIGURE 12.45 Gigabit SFP
Dmitry Nosachev, CC BY‐SA 4.0, https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons
Wireless networks can get overloaded, too. It's recommended that no more than 30 or so client computers use one wireless access point (WAP) or wireless router. (Wi‐Fi 6 can handle more but should still be limited to 60–100 at most.) Any more than that can cause intermittent access problems. The most common reason that users on wireless networks experience any of these issues, though, is distance. The farther away from the WAP the user gets, the weaker the signal becomes. When the signal weakens, the transfer rates drop dramatically. For example, the signal from an 802.11ac (Wi‐Fi 5) wireless router has a maximum indoor range of about 35 meters (115 feet), barring any obstructions. At that distance, though, 802.11ac will support transfer rates of only about 50 Mbps—far less than the 1.3 Gbps the users think they're getting. External interference, such as from radio signals, microwaves, large motors, and fluorescent lights, as well as physical barriers such as concrete or steel, can greatly reduce the effective range. The solution is to move closer or install more access points. Depending on the configuration of your working environment, you could also consider adding a directional antenna to the WAP. The antenna will increase the distance the signal travels but only in a limited direction.
In this chapter, we discussed hardware and network troubleshooting. First, we looked at issues common to storage devices and RAID arrays, such as lights and sounds, devices not found, slow performance, S.M.A.R.T. errors, and optical drive problems. Then, we looked at video issues, such as input and image problems and how to resolve them. Next, we covered problems that are unique to mobile devices and laptop computers. Because of their compact nature, they have unique issues relating to heat and power, input and output, connectivity, and potential damage.
We followed that with a discussion on troubleshooting printers. Specifically, we discussed problems with three major classes of printers, including impact, inkjet, and laser, and then we talked about managing print jobs, the print spooler, printing a test page, and printer configuration options.
Finally, we ended the chapter with a section on troubleshooting issues that are specific to networking. We looked at tools and commands that you can use to troubleshoot network problems, and then finished with symptoms and fixes for a variety of connectivity problems.
Know what the ipconfig, ping, and tracert commands are used for. Admittedly, these are specifically for A+ exam 220‐1102, but know what they do. Both ipconfig and ping are network troubleshooting commands. You can use ipconfig to view your computer's IP configuration and ping to test connectivity between two network hosts. tracert allows you to view the network path a packet takes from the host to the destination.

Know what the netstat, net, and nslookup commands are used for. These commands need to be understood for exam 220‐1102 as well. netstat shows network statistics; net allows you to perform network‐management tasks, such as sharing folders; and nslookup allows you to query a DNS server.

The answers to the chapter review questions can be found in Appendix A.
ipconfig to ensure that they are receiving the right IP address from the DHCP server.
ipconfig /refresh
ipconfig /renew
ifconfig /release
ifconfig /start
You will encounter performance‐based questions on the A+ exam. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
Your network users are sending print jobs to the printer, but they are stacking up in the queue and not printing. The printer appears to be online and has paper. How would you stop and restart the print spooler in Windows 10?
THE FOLLOWING CompTIA A+ 220‐1102 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
The previous chapters focused mainly on the hardware and physical elements of the computing environment. We looked at the physical components, or hardware, of personal computers, laptops, and mobile devices, as well as networking, printers, and troubleshooting procedures. That completes the coverage of the topics on the 220‐1101 exam. This chapter marks a departure from that.
In this chapter—and several to come—the focus is on operating systems (OSs). To be specific, the majority of information will be on the Microsoft Windows operating systems, which you must know well for the 220‐1102 certification exam. However, the 220‐1102 exam also requires basic knowledge of the macOS and Linux operating systems. These are covered in Chapter 16, “Working with macOS and Linux.”
Computers are pretty much useless without software. A piece of hardware might just as well be used as a paperweight or doorstop if you don't have an easy way to interface with it. Software provides that way. While there are many types of software, or programs, the most important one you'll ever deal with is the operating system. Operating systems have many different, complex functions, but two of them jump out as being critical: interfacing with the hardware and providing a platform on which other applications can run.
Here are three major distinctions of software about which you should be aware:
Output formatting
FIGURE 13.1 The operating system interacts with resources.
Once the OS has organized these basic resources, users can give the computer instructions through input devices (such as a keyboard or a mouse). Some of these commands are built into the OS, whereas others are issued through the use of applications. The OS becomes the center through which the system hardware, other software, and the user communicate; the rest of the components of the system work together through the OS, which coordinates their communication.
In the following sections, we'll look at some terms and concepts central to all operating systems. Then we'll move into specific discussions of Windows operating systems.
Before we get too far into our discussion of PC operating systems, it will be useful to define a few key terms. The following are some terms that you will come across as you study this chapter and work in the computer industry:
A 32‐bit operating system has the limitation of addressing only 4 GB of RAM. Most new computer systems on the market today come with a preinstalled 64‐bit operating system. The 4 GB limitation will present itself if you are re‐installing a computer system that has more than 4 GB of RAM and use a 32‐bit version of the operating system. You will find out quickly that you just downgraded the computer.
When we think of an operating system, Windows or macOS is probably the first that comes to mind. These operating systems comprise only one category of operating system, however. We also use other operating systems, such as Android and Apple iOS, without even realizing how much they have become part of our daily lives. CompTIA recognizes these various operating systems and has integrated them into the 220‐1102 exam.
An operating system category defines the use and function of both the operating system and the hardware. All operating systems fit into one of four different broad categories: server, workstation, mobile, or cloud‐based operating systems.
Now that you understand the various categories of operating systems you may encounter, we will explore the common operating systems you will find in your everyday work as an A+ technician.
An operating system life cycle begins when the operating system is introduced and ends when it is no longer supported. As a computer technician, you should pay close attention to an operating system's life cycle, because the end‐of‐life (EOL) date for an OS means that it will no longer receive updates.
When an operating system is considered end‐of‐life, newer features will not be added to the OS in the future. More importantly, security updates will no longer be offered, which will put your operating system and information at risk of compromise.
The network administrator should pay close attention to the dates on which an operating system is considered end‐of‐life. If the organization has a support contract, support for an operating system that has reached end of life may no longer be honored. Of course, the lack of security updates for a corporate network is more concerning, since the stability of the organization can be compromised.
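To compare a machine against published end‐of‐life dates, you first need to know exactly which version and build it is running. Either of the following built-in commands will tell you; the second prints to the console, which is handy for scripting:

winver
systeminfo | findstr /B /C:"OS Name" /C:"OS Version"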
It has been common practice for network administrators to skip every other operating system version. This is mainly attributed to the long‐term support for the current operating system and the amount of work required to upgrade an entire organization's operating systems. In addition to these factors, many hardware vendors support the current operating system plus the last release. Administrators can purchase a new laptop or PC and request the prior operating system. However, once the next operating system is released, the loss of support for the old operating system will force an upgrade.
The following common terminology is used by operating system developers in relation to the life cycle of an OS:
In the upcoming chapters, we'll explore how to install and upgrade each of the operating systems that you need to know for the exam. However, the hardware requirements of the operating system that you are thinking of installing can prevent you from even considering these options. Before you can begin to install an OS, you must consider several items. You must perform the following tasks before you even start to think about undertaking the installation:
Let's begin our discussion by talking about hardware compatibility issues and the requirements for installing the various versions of Windows.
Before you can begin to install any version of an operating system, you must determine whether it supports the hardware that you will be using. That is, will the version of the operating system have problems running any of the device drivers for the hardware that you have?
To answer this question, operating system vendors have developed several versions of hardware compatibility lists (HCLs). An HCL is a list of all the hardware that works with the operating system. Microsoft published HCLs for many prior versions of Windows. Since the release of Windows 10, the HCL has disappeared completely and it is now the responsibility of the vendor to certify the compatibility of the hardware with Microsoft. However, many other operating system vendors still have a recommended HCL.
The point is, before installing an operating system, you should check all your computer's components against an HCL or the manufacturer of the component to ensure that they are compatible with the version of operating system you plan to install. If a product is not on the list, that does not mean it will not work; it merely means it has not been tested. The list represents tested software and hardware that vendors have stated are compatible, but it is by no means all‐inclusive.
In addition to general compatibility, it is important that your computer have enough “oomph” to run the version of an operating system that you plan to install. For that matter, it is important for your computer to have enough resources to run any software that you plan to use. Toward that end, Microsoft (as well as other software publishers) releases a list of both minimum and recommended hardware specifications that you should follow when installing Windows.
“Minimum specifications” are the absolute minimum requirements for hardware that your system should meet in order to install and run the OS version you have chosen. “Recommended hardware specifications” are what you should have in your system in order to realize usable performance. Always try to have the recommended hardware (or better) in your system. If you don't, you may have to upgrade your hardware before you upgrade your operating system if you're running anything beyond a minimal environment. Table 13.1 lists the minimum hardware specifications for Windows 10 and Windows 11. Note that in addition to these minimum requirements, the hardware chosen must be compatible with the selected version of Windows. Also be aware that additional hardware may be required if certain features are installed (for example, a fingerprint reader is required to use biometric logins). Windows 11 is only available in a 64‐bit version and requires a Trusted Platform Module 2.0 and UEFI for added security of the operating system and its applications.
Operating System | Windows 10 | Windows 10 | Windows 11 |
---|---|---|---|
Architecture | 32‐bit | 64‐bit | 64‐bit |
Processor | 1 GHz or faster processor or System on a Chip (SoC) | 1 GHz or faster processor or System on a Chip (SoC) | 1 GHz or faster processor with 2 or more cores or System on a Chip (SoC) |
Memory | 1 GB | 2 GB | 4 GB |
Free hard disk space | 16 GB | 32 GB | 64 GB |
Graphics card | Microsoft DirectX 9 or later graphics device with WDDM 1.0 driver | Microsoft DirectX 9 or later graphics device with WDDM 1.0 driver | Microsoft DirectX 12 or later graphics device with WDDM 2.0 driver |
Display | 800 × 600 | 800 × 600 | High definition 720p |
Additional hardware | N/A | N/A | UEFI & TPM 2.0 required |
TABLE 13.1 Windows 10 and Windows 11 minimum system requirements
If there is one thing to be learned from Table 13.1, it is that Microsoft is nothing if not optimistic. For your own sanity, though, we strongly suggest that you always take the minimum requirements with a grain of salt. They are, after all, minimum requirements. Even the recommended requirements should be considered minimum requirements. The bottom line is to make sure that you have a good margin between your system's performance and the minimum requirements listed. Always run Windows on more powerful hardware rather than less!
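Because Windows 11 adds the TPM 2.0 and UEFI requirements shown in Table 13.1, it's worth verifying them before planning an installation. As a rough sketch, both of the following built-in PowerShell cmdlets must be run from an elevated prompt, and Confirm-SecureBootUEFI works only on UEFI firmware:

Get-Tpm                 # reports whether a TPM is present and ready
Confirm-SecureBootUEFI  # returns True when UEFI Secure Boot is enabled; errors on legacy BIOS

The graphical alternatives are tpm.msc and System Information (msinfo32), which lists BIOS Mode and Secure Boot State.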
Other hardware—sound cards, network cards, modems, video cards, and so on—may or may not work with Windows. If the device is fairly recent, you can be relatively certain that it was built to work with the newest version of Windows. If it is older, however, you may need to find out who made the hardware and check their website to see if there are drivers available for the version of Windows that you are installing.
There's one more thing to consider when evaluating installation methods. Some methods work only if you're performing a clean installation, not an upgrade.
An operating system without any applications installed on it isn't very useful. Even back when Windows 1.0 was released 30 years ago, it came with applications, such as Minesweeper, a File Manager, and a simple word processor. In this section, we will explore installing applications for Windows 10/11. However, these concepts can be applied to any operating system.
Before installing an application, you must collect information on the requirements of the application to make sure that your system can satisfy the requirements. If your system cannot satisfy the requirements for the application, you may have to upgrade the system. You can find application requirements on the vendor's website. For example, if you need to install the Microsoft Power BI Desktop application, you would search Microsoft's site to find the system requirements for this particular application. The documentation would reveal several different requirements, such as the following:
When an application's requirements are published, they are the bare minimum requirements, not the optimal recommended specifications. The requirements are a conservative estimate so that a potential customer is not scared away by expensive hardware upgrades. When extremely low requirements are advertised, we always recommend calling a presales support person. You should describe how the application will be used and seek their recommendations for optimizing the application's performance. You will often find that their recommendations are double or triple the advertised bare minimum requirements.
Storage is probably the biggest consideration when evaluating the installation of an application that will store user data. It is extremely difficult to gauge how much storage should be set aside for the application for user data. If too much storage is allocated, the precious commodity might go unused for the life of the application. If too little is allocated, you may have to upgrade the storage—or, worse, you could fill the entire drive up and be forced to do it in a panic.
The most common architectures for CPUs are 32‐bit and 64‐bit. They are noted as x86 for 32‐bit and x64 or AMD64 for 64‐bit; the 64‐bit CPU extensions were adopted from AMD's implementation, hence the AMD64 designation. When installing an operating system, the CPU must meet or exceed the operating system's architecture requirement. If you are installing a 64‐bit operating system, then you must have at least a 64‐bit CPU. If you are installing a 32‐bit operating system, you can install it on either a 32‐bit or a 64‐bit CPU. However, if you have a 64‐bit CPU, it always makes sense to install a 64‐bit operating system, since it gives you maximum flexibility and you typically can't upgrade to a different architecture later.
Operating system compatibility is a consideration for older applications that are still required by organizations. Application compatibility has been incorporated into the Windows operating system since Windows XP. It allows for an application to behave as though the operating system was an older operating system. For example, over the years, Windows has become more restrictive of local permissions on the filesystem. If an application that was written for an older operating system expects to write to a Registry key, application compatibility will allow it to think it's directly writing to the location.
Once the requirements are met for an application and the considerations are investigated, you are ready to install the application. You must now consider how the application will be installed; the number of machines on which the application will be installed will factor into this consideration. This section discusses several different ways that an application can be installed.
If the installation is a one‐off installation, then a CD/DVD drive might be your best option. In recent years, applications have even been shipped on Blu‐ray media. If you must install the application on several different PCs, then this method may not be the preferred installation method. When installing from optical media, even the fastest optical drive is slow compared to other methods, such as USB.
Although optical discs have been around since the mid‐1990s and have been the most popular method of installing applications and operating systems, the optical disc is quickly becoming a relic. When you use a virtualization product such as Hyper‐V or VMware Workstation, the optical disc is just too clumsy and slow to use. Mountable ISO images of the physical media have become the new norm. When you need to install an operating system or application, you simply download the media from the vendor, mount the ISO, and install it as if you had a virtual optical drive. ISO sizes will vary from 500 MB to 9.6 GB with normal CD and DVD formats, but Blu‐ray discs can be up to 45 GB.
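Mounting an ISO no longer requires a third‐party tool; File Explorer can do it (right‐click the file and choose Mount), and so can PowerShell. A minimal sketch; the path to the image is just an example:

Mount-DiskImage -ImagePath "C:\Downloads\app.iso"      # Windows assigns the next free drive letter
Dismount-DiskImage -ImagePath "C:\Downloads\app.iso"   # detach the image when finished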
Applications are outgrowing optical media such as CD‐ROM and DVD‐ROM, so USB drives have become popular. USB drives are faster and bigger than optical media. If a handful of computers require the application, then this might be a better option. However, the disadvantage is that simultaneous installations are limited to the number of flash drives you have with the application loaded. Another common problem with USB drive installations is that the USB drives are lost from time to time or inadvertently overwritten. For this reason, many application vendors lock their drives so they can't be overwritten and repurposed.
When you need to install an application on many different PCs, a network installation should be your first choice. The application is typically uploaded to a file share by the administrator, and then the file share is set to read‐only access for the user who is performing the installation. Depending on the speed of the network, this could be the fastest method to install an application on several different PCs simultaneously.
There are several different methods for deploying an application over the network. Each method depends on the number and location of the computers.
These installations are used when the administrator of the PC will start the installation manually. This method is preferable when the administrator is expected to answer specific questions during the installation, such as where to install the application.
The installation of any application generally requires that the user be an administrator of the operating system or have elevated privileges to install an application. If user‐initiated installation is chosen as the method for deploying an application, you must be sure that the user account has the appropriate permissions to install the application.
Two different types of automated installations can be employed by the administrator: push installations and pull installations. Either installation type is used when the conformity of the installation is required.
Automated installation products, such as Microsoft Endpoint Configuration Manager (MECM), can deploy applications to multiple PCs with a push installation. MECM, formerly called Microsoft System Center Configuration Manager (SCCM), is considered the Swiss Army knife of installation and reporting services. Each client in the network will have the MECM agent installed prior to the push installation of the application. The agent will then be responsible for reporting on the current operating system. This allows you to collect information for determining whether you can satisfy the requirements of the application. The agent can then be used as the contact point for the push installation of the application.
Group Policy can also be used to automate the installation of applications. The Group Policy method is a pull‐based installation method, where the client will pull the application from the network share. This method contains no agent, so reporting is not available for client resources, installation requirements, or installation status. The benefit is that it requires very little infrastructure if you have Microsoft Active Directory (AD) installed already.
Regardless of the installation type, push‐ or pull‐based, most automated installation methods do not require the user logged in to be an administrator. MECM can be configured to use a system account that has elevated privileges in the target operating system to install the application. Group Policy pull‐based installations can also be configured to use a system account that has elevated privileges to install the application.
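Whether the application is pushed by MECM or pulled by Group Policy, Windows Installer packages are usually run silently from the network share. A hedged example; the share path and package name here are hypothetical:

msiexec /i \\fileserver\apps\accounting.msi /qn /norestart /l*v %TEMP%\accounting-install.log

The /qn switch suppresses the user interface, /norestart prevents an automatic reboot, and /l*v writes a verbose log you can review if the installation fails.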
When an application is installed in the operating system, the overall security of the operating system can potentially be compromised. Of course, it is not the intent of the application to weaken the security of the operating system, but vulnerabilities in the application can exist. Application developers realize that their code is not perfect and that vulnerabilities can exist, so many applications now include self‐updating capabilities.
Applications weakening the security of the local device should not be your only concern. Applications can actually weaken the security of the entire network, especially if they are not updated frequently. Applications that operate over the network with client‐server functionality pose the largest security risks, since they allow for remote exploitation of vulnerabilities. One measure that can be employed with network applications that are not updated frequently is to firewall the services. There are many other active measures, which we will discuss in further detail in Chapter 18, “Securing Operating Systems.”
Once the requirements of the application are satisfied, the security of the application is evaluated, and an installation method is chosen, there are several other considerations that should be reviewed before installing a new application. These considerations must be evaluated to ensure that the installation of the application does not impose unintended problems on the organization and network.
Microsoft released Windows 10 on July 29, 2015, and offered a free upgrade from Windows 7, Windows 8, and Windows 8.1. The Windows 10 upgrade was initiated by the Get Windows 10 (GWX) upgrade tool, which was installed via a Windows Update. This tool was automatically installed on Windows operating systems that were eligible for the upgrade and just about forced users to upgrade. The free upgrade was to be limited to the first year of the Windows 10 release and expired on July 29, 2016. It has been reported that Microsoft still allows the upgrade and activation of Windows 10/11 for free, but your luck may vary. For these reasons, Windows 10 was rapidly adopted and continues to be adopted as other Windows operating system versions end mainstream support.
Windows 10 has had a total of 15 editions released, with five of the editions being discontinued (Windows 10 Mobile, Windows 10 Mobile Enterprise, IoT Mobile, Windows 10 S, and Windows 10X). Three of the editions are device‐specific editions for IoT (Internet of Things), Holographic, and Windows 10 Team, which is loaded on the Surface Hub (interactive whiteboard). Only two of the 15 editions were made available in the retail channel: Windows 10 Home and Windows 10 Pro. Microsoft has made it easy for the retail consumer to pick the right edition of Windows 10, narrowing down retail choices to the two editions. Technically, there is a third retail edition called Windows 10 Pro for Workstations, which is preinstalled on high‐end hardware for high‐performance computing (HPC) requirements and allows up to 4 CPUs and 6 TB of RAM.
The editions available for Microsoft volume licensing options are Windows 10 Enterprise, Windows 10 Enterprise LTSC (Long‐Term Servicing Channel), Windows 10 Education, and Windows 10 Pro Education. Windows 10 volume license editions include features such as AppLocker, BranchCache, and DirectAccess, just to name a few. Windows 10 Enterprise LTSC is an edition that is released every 2–3 years and is supported for 10 years after its initial release. Windows 10 LTSC receives normal Windows updates for security but does not receive feature upgrades. The Microsoft Store and bundled apps are also omitted from the Windows 10 Enterprise LTSC edition. Both Windows 10 Enterprise and Windows 10 Education editions have the same features over and above Windows 10 Pro and Windows 10 Home editions. Windows 10 Education editions are made available only to academic institutions and K–12 schools.
Although there are 15 different editions of Windows, the 220‐1102 exam focuses on the following four editions:
There are 32‐bit and 64‐bit versions available for each of the editions listed except Windows 10 Pro for Workstations, since it is specifically used for high‐performance computing. Microsoft released Windows 10 as a successor to Windows 8.1, with the key goal of bridging the gap for cloud‐based services while polishing the Windows 8.1 interface. With the introduction of Windows 11, Microsoft only offers Windows 11 in a 64‐bit version.
If you want to do an upgrade of Windows 10/11 instead of a clean installation, review the recommended upgrade options in Table 13.2. This is a major change from prior upgrade paths, where there were specific restrictions. If you upgrade from Windows 7 Starter to Windows 10/11 Pro, you'll need to provide the new activation key for the higher edition. Although you can switch editions during an upgrade, it is recommended that you use like‐to‐like editions of Windows. A like‐to‐like edition is an edition with similar functionality, such as Windows 7 Home Premium to Windows 10/11 Home, or Windows 7 Professional to Windows 10/11 Professional.
Existing operating system | Windows 10/11 Home | Windows 10/11 Pro | Windows 10/11 Education |
---|---|---|---|
Windows 7 Starter | Yes | No | No |
Windows 7 Home Basic | Yes | No | No |
Windows 7 Home Premium | Yes | No | No |
Windows 8/8.1 Home Basic | Yes | No | No |
Windows 7 Professional | No | Yes | No |
Windows 7 Ultimate | No | Yes | No |
Windows 8/8.1 Education | No | Yes | Yes |
Windows 8/8.1 Pro | No | Yes | No |
TABLE 13.2 Windows 10 recommended upgrade options
There are some prerequisites for performing an upgrade to Windows 10/11 if you want to retain settings and applications. The first prerequisite is that you must have at least Windows 7 with Service Pack 1 installed. So, if you have Vista installed, you would need to first perform an in‐place upgrade to Windows 7 SP1. Then you could perform an in‐place upgrade to Windows 10/11. An in‐place upgrade is an upgrade in which you upgrade the current operating system to the desired version.
It is also recommended that you upgrade Windows 8 to Windows 8.1 before upgrading to Windows 10/11, but that is not a strict requirement. It is always recommended to have the highest level of service pack or edition prior to performing an upgrade. It minimizes problems later, during the upgrade process.
Another major restriction is that you cannot switch architecture during an in‐place upgrade. If you have a 32‐bit installation of Windows, then you will need to upgrade to a 32‐bit version of Windows 10. You can switch architectures only when performing a clean installation, in which the old operating system is overwritten. This means that you must back up and restore settings, files, and applications. However, the advantages of upgrading to a 64‐bit version may outweigh the bothersome process of reinstalling the applications.
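Before committing to an upgrade path, you can confirm both the installed OS architecture and what the CPU itself is capable of. A quick PowerShell sketch:

(Get-CimInstance Win32_OperatingSystem).OSArchitecture   # for example, "64-bit"
(Get-CimInstance Win32_Processor).DataWidth              # 64 indicates a 64-bit-capable CPU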
It is always recommended to have a backup of your installation before you begin upgrading. It is equally important to check the application compatibility before upgrading. You should always check with the vendor of the application prior to upgrading to make sure they support Windows 10/11. An upgrade of the application may be required, or a completely new version of the application may be needed.
In addition to checking application compatibility prior to upgrading, you should check the hardware compatibility. Most of the time, a simple upgraded driver is required; however, sometimes you will find that the hardware is not compatible and that new hardware is required. In rare instances, the PC hardware is just not compatible and a new PC is required. However, this is usually the case with laptops, since components are not easily upgradable or upgrades are impossible. So do your homework prior to upgrading and know what to expect after the upgrade process is complete.
Every Windows operating system edition has a set of features that are bundled into the edition being purchased. The difference between the four main editions of the operating system—Home, Pro, Pro for Workstations, and Enterprise—is how the edition is purchased and its accompanying features. Windows 10/11 Home and Windows 10/11 Pro are both available through retail channels and can be purchased off the shelf. Windows 10/11 Pro for Workstations is typically preinstalled on high‐performance workstations. Windows 10/11 Enterprise edition is unavailable without a volume license agreement from Microsoft. Table 13.3 compares the features of the four main editions of Windows 10/11.
Edition | Maximum RAM supported | Maximum physical CPUs supported (multiple cores) | Notes |
---|---|---|---|
Home | 128 GB | 2 | Lacks support for Remote Desktop (client only), BitLocker, Windows To Go, Hyper‐V, joining to a domain, and participating in Group Policy. This edition is strictly for consumer use. |
Pro | 2 TB | 2 | Can join a Windows server domain; includes Remote Desktop Server, BitLocker, Windows To Go, Hyper‐V, and participating in Group Policy. |
Pro for Workstations | 6 TB | 4 | Includes all features of the Pro edition, with support for 6 TB of RAM and up to 4 physical CPUs. |
Enterprise | 6 TB | 2 | Includes BitLocker, support for domain joining and Group Policy, DirectAccess, AppLocker, and BranchCache. This edition is available only through a volume license subscription. |
TABLE 13.3 Windows 10/11 features and editions
The following is a list of features introduced and associated with the Windows 10/11 operating system that you should know for the exam, along with a brief description of each:
FIGURE 13.2 The Cortana interface
FIGURE 13.3 The Microsoft Edge web browser
FIGURE 13.4 The Windows Action Center
FIGURE 13.5 The BitLocker Control Panel applet
FIGURE 13.6 The Task View window
FIGURE 13.7 The Windows 10 Start menu
FIGURE 13.8 The Windows 10 Lock Screen and Spotlight
FIGURE 13.9 Microsoft Defender Virus & Threat Protection settings
FIGURE 13.10 The Settings app in Windows 10
FIGURE 13.11 An application context menu
FIGURE 13.12 Windows Snap Assist
If you've worked with older versions of Windows (such as Windows 7), you'll notice that it looks similar to the current Windows interface. While there are some differences, most of the basic tasks are accomplished in almost identical fashion on everything from a Windows 95 workstation computer on up. Also, although the tools that are used often vary between the different OSs, the way that you use those tools remains remarkably consistent across platforms.
We will begin with an overview of the common elements of the Windows GUI. We will then look at some tasks that are similar across Windows operating systems. You are encouraged to follow along by exploring each of the elements as they are discussed.
The Desktop is the virtual desk on which all your other programs and utilities run. By default, it contains the Start menu, the taskbar, and a number of icons. The Desktop can contain additional elements, such as shortcuts or links to web page content. Because the Desktop is the foundation on which everything else sits, the way that the Desktop is configured can have a major effect on how the GUI looks and how convenient it is for users. When you click the lower‐left corner of the Desktop, the Start menu appears. (Right‐clicking the Windows icon in Windows 8.1 and above displays a set of operating system functions.)
You can change the background patterns, screen saver, color scheme, and size of elements on the Desktop by right‐clicking any area of the Desktop that doesn't contain an icon. The menu that appears, similar to the one shown for Windows 10 in Figure 13.13, allows you to do several things, such as create new Desktop items, change how your icons are arranged, or select a special command called Properties or Personalize.
FIGURE 13.13 The Windows 10 Desktop context menu
When you right‐click the Desktop in Windows 10 and choose Personalize, you will see the Display Settings screen, as shown in Figure 13.14.
FIGURE 13.14 The Windows 10 Display Settings screen
With the rapid adoption of Windows 10, this book will cover the most current version of Windows 10 (21H2), since the CompTIA objectives focus on Windows 10. However, every prior operating system Microsoft has produced has similar settings for personalization. We will cover the main ones in the Display Settings window for Windows 10, but Windows 11 is identical in functionality.
In Exercise 13.1, you will see how to change a screen saver.
The taskbar (see Figure 13.15) is another standard component of the Windows interface. Note that although the colors and feel of the Desktop components, including the taskbar, have changed throughout the operating system versions, the components themselves are the same. In versions prior to Windows 10, the taskbar contains two major items: the Start menu and the notification area, previously called the system tray (systray). The Start menu is on the left side of the taskbar and is easily identifiable: it is a button that has the Windows logo or the word Start on it, or in the case of Windows 7, it is the large Windows icon. The system tray is located on the right side of the taskbar and contains only a clock by default, but other Windows utilities (for example, screen savers or antivirus utilities) may put their icons there to indicate that they are running and to provide the user with a quick way to access their features.
FIGURE 13.15 The Windows 10 taskbar
Windows also uses the middle area of the taskbar. When you open a new window or program, it gets a button on the taskbar with an icon that represents the window or program as well as the name of the window or program. To bring that window or program to the front (or to maximize it if it was minimized), click its button on the taskbar. As the middle area of the taskbar fills with buttons, the buttons become smaller so that they can all be displayed.
Windows 8/8.1 and Windows 10/11 allow you to pin commonly used programs to the taskbar. The icon will appear on the taskbar, and when the program is running, a line will appear under the icon. You can pin any running task by right‐clicking the icon in the taskbar and selecting Pin To Taskbar. You can just as easily remove pinned icons by right‐clicking them and selecting Unpin From Taskbar.
You can increase the size of the taskbar as well as move its position on the Desktop. Either of these tasks requires you to first unlock the taskbar, by right‐clicking the taskbar and deselecting Lock The Taskbar. (By default, it is enabled.) You can then move the mouse pointer to the top of it and pause until the pointer turns into a double‐headed arrow. Once this happens, click the mouse and move it up to make the taskbar bigger, or move it down to make it smaller. You can also click the taskbar and drag it to the top or side of the screen.
You can make the taskbar automatically hide itself when it isn't being used (thus freeing that space for use by the Desktop or other windows). In Exercise 13.2, we will show you how to do this.
Back when Microsoft officially introduced Windows 95, it bought the rights to use the Rolling Stones’ song “Start Me Up” in its advertisements and at the introduction party. Microsoft chose that particular song because the Start menu was the central point of focus in the new Windows interface, as it was in all subsequent versions.
To display the Start menu, you can press the Windows key on your keyboard at any time. You can also click the Windows logo button in the taskbar in Windows 10 and 8.1. You'll see a Start menu similar to the one shown in Figure 13.16 for Windows 10. The Windows 11 Start menu is functionally similar. The only difference is that the layout is centered in the screen.
From the Start menu, you can select any of the various options the menu presents. An arrow pointing down on a folder indicates that more items exist in the folder. To select a submenu for an icon, move the mouse pointer over the icon and right‐click. The submenu will appear, allowing you to pin, rate, and uninstall an application, just to name a few options.
The following sections describe the principal features of the Windows 10/11 Start menu.
FIGURE 13.16 Sample Windows 10 Start menu
Windows 10 introduced Cortana, a personal desktop assistant for the Windows operating system. In Windows 10, Cortana is enabled by default and allows you to search without clicking the Start menu. The search box is located to the right of the Start menu. You just need to start typing. Cortana will search apps installed, documents, and the web. Cortana will even come up with suggestions, as shown in Figure 13.17. You don't even need to type; you can click the microphone in the search box and speak your search. With Windows 11, Cortana has become an app and is no longer integrated with the Start menu.
FIGURE 13.17 The Cortana personal desktop assistant
Windows has always included a very good Help system. With the addition of Cortana, Microsoft had originally elected to leave help and support to web searches. However, Microsoft has since released the Get Help app in the Microsoft Store, and in later operating systems, it comes preinstalled. Hardware vendors may also add a help and support center for the hardware platform.
It is possible to run commands and utilities from the Cortana search box or from the Run dialog box. To access the Run dialog box in Windows 10, simply press the Windows key and the R key at the same time. To execute a particular program, type its name in the Open field. If you don't know the exact path, you can browse to find the file by clicking the Browse button. Once you have typed in the executable name, click OK to run the program.
Applications can easily be started from the Run window. You often will find it faster to open programs this way than to search for their icons in the Start menu maze. In Exercise 13.2, you will see how to start a program from the Run window.
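A few Run box entries come up constantly in day-to-day work; each of the following can be typed into the Run dialog (Windows+R) and launched by pressing Enter:

notepad    (opens the Notepad text editor)
msinfo32   (opens System Information)
control    (opens Control Panel)
cmd        (opens a command prompt)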
Windows operating systems are very complex. At any one time, many files are open in memory. If you accidentally hit the power switch and turn off the computer while these files are open, there is a good chance that they will be corrupted. For this reason, Microsoft has added the Shut Down command under the Start menu. The command appears as an icon of an on/off button without a label. When you select this option, Windows presents you with several choices. The submenu will display Sleep, Shutdown, and Restart.
Icons are shortcuts that allow a user to open a program or a utility without knowing where that program is located or how it needs to be configured. Icons consist of the following major elements:
The label and graphic of the icon typically tell the user the name of the program and give a visual hint about what that program does. The icon for the Notepad program, for instance, is labeled Notepad, and its graphic is a notepad. By right‐clicking an icon once, you make it the active icon and a drop‐down menu appears. One of the selections is Properties. Clicking Properties brings up the icon's attributes (see Figure 13.18), and it is the only way to see exactly which program an icon is configured to start and where the program's executable is located. You can also specify whether to run the program in a normal window or maximized or minimized.
FIGURE 13.18 The Properties window of an application
Additional functionality has been added to an icon's properties to allow for backward compatibility with older versions of Windows (known as compatibility mode). To configure this, click the Compatibility tab and specify the version of Windows for which you want to configure compatibility. Note that you cannot configure compatibility if the program is part of the version of Windows that you are using. Figure 13.19 shows the settings available for an older program.
FIGURE 13.19 The Compatibility settings possible with an older program
This feature is helpful if you own programs that used to work in older versions of Windows but no longer run under the current Windows version. In addition, you can specify different display settings that might be required by older programs.
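Under the hood, the Compatibility tab simply records a compatibility layer for that executable in the registry, which means the same setting can be scripted. The following is only a sketch; the program path is hypothetical, and the layer token (WIN7RTM here, for Windows 7 compatibility) varies with the Windows version being emulated:

reg add "HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers" /v "C:\OldApps\legacy.exe" /t REG_SZ /d "WIN7RTM" /f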
In addition to the options in your Start menu, a number of icons are placed directly on the Desktop in Windows. The Recycle Bin icon is one of these icons. In the latest version of Windows 10 (21H2), the Microsoft Edge icon can also be found on the Desktop. In older versions of Windows, the Computer icon could also be found on the desktop.
The Computer Icon If you double‐click the Computer icon, it displays a list of all the disk drives installed in your computer. In addition to displaying disk drives, it displays a list of other devices attached to the computer, such as scanners, cameras, and mobile devices. The disk devices are sorted into categories such as Hard Disk Drives, Devices With Removable Storage, Scanners And Cameras, and so on.
You can delve deeper into each disk drive or device by double‐clicking its icon. The contents are displayed in the same window.
In addition to allowing you access to your computer's files, the This PC icon on the Desktop lets you view your machine's configuration and hardware, also called the System Properties.
The Network Icon Another icon in Windows relates to accessing other computers to which the local computer is connected, and it's called Network (known as My Network Places in previous versions).
Opening Network lets you browse for and access computers and shared resources (printers, scanners, media devices, and so on) to which your computer can connect. This might be another computer in a workgroup. It is important to note that network browsing will not operate if the PC is joined to a domain. Network browsing in Windows 7 and above is restricted to the Workgroup mode.
Through the properties of Network, you can configure your network connections, including LAN and dial‐up connections (should you still live in an area where a now antiquated dial‐up connection is required for Internet access).
You can add common Desktop icons by navigating to Settings ➢ Personalization ➢ Themes and clicking Desktop Icon Settings. A dialog box will appear that allows you to add and change the common Desktop icons, as shown in Figure 13.20.
FIGURE 13.20 Common icons can easily be added to the Desktop.
The Recycle Bin All files, directories, and programs in Windows are represented by icons. These icons are generally referred to as objects. When you want to remove an object from Windows, you do so by deleting it. Deleting doesn't just remove the object, though; it also removes the ability of the system to access the information or application that the object represents. Therefore, Windows includes a special folder where all deleted files are placed: the Recycle Bin. The Recycle Bin holds the files until it is emptied or until you fill it. It gives users the opportunity to recover files that they delete accidentally. By right‐clicking the Recycle Bin icon, you can see how much disk space is allocated. Some larger files that cannot fit in the Recycle Bin will be erased after a warning.
You can retrieve a file that you have deleted by opening the Recycle Bin and then dragging the file from the Recycle Bin to where you want to restore it. Alternatively, you can right‐click a file and select Restore. The file will be restored to the location from which it was deleted.
To erase files permanently, you need to empty the Recycle Bin, thereby deleting any items in it and freeing the hard drive space they took up. If you want to delete only specific files, you can select them in the Recycle Bin, right‐click, and choose Delete. You can also permanently erase files (bypassing the Recycle Bin) by holding down the Shift key as you delete them (by dragging the file and dropping it in the Recycle Bin, pressing the Del key, or clicking Delete on the file's context menu). If the Recycle Bin has files in it, its icon looks like a full trash can; when there are no files in it, it looks like an empty trash can.
We have now looked at the nature of the Desktop, the taskbar, the Start menu, and icons. Each of these items was created for the primary purpose of making access to user applications easier. These applications are, in turn, used and managed through the use of windows—the rectangular application environments for which the Windows family of operating systems is named. We will now examine how windows work and what they are made of.
A program window is a rectangular area created on the screen when an application is opened within Windows. This window can have a number of different forms, but most windows include at least a few basic elements.
Several basic elements are present in a standard window. Figure 13.21 shows the control box, title bar, Minimize/Maximize button, Close button, and resizable border in the text editor Notepad (notepad.exe), which has all the basic window elements—and little else.
FIGURE 13.21 The basic elements of a window, as seen in Notepad
The basic window elements are as follows:
Not every element is found in every window, because application programmers can choose to eliminate or modify each item. Still, in most cases, they will be consistent, with the rest of the window filled in with menus, toolbars, a workspace, or other application‐specific elements. For instance, Microsoft Word, the program with which this book was written, adds a Ribbon control. It also has a menu bar, a number of optional toolbars, scroll bars at the right and bottom of the window, and a status bar at the very bottom. Application windows can become quite cluttered.
Notepad is a very simple Windows program. It has only a single menu bar and the basic elements shown in Figure 13.21. It also starts a simple editor, where you can edit a file that already exists or create a new one. Figure 13.22 shows a Microsoft Word window. Both Word and Notepad are used to create and edit documents, but Word is far more configurable and powerful and therefore has many more optional components available within its window.
There is more to the Windows interface than the specific parts of a window. Windows also are movable, stackable, and resizable, and they can be hidden behind other windows (often unintentionally).
FIGURE 13.22 A window with many more components, as seen in Microsoft Word
When an application window has been launched, it exists in one of three states:
When one program is open and you need to open another (or maybe you need to stop playing a game because your boss has entered the room), you have two choices. First, you can close the program currently in use and simply choose to reopen it later. If you do this, however, the contents of the window (your current game, for example) will be lost, and you will have to start over. Once the program has been closed, you can move on to open the second program.
The second option is to minimize the active window. Minimizing the game window, for example, removes the open window from the screen and leaves the program open but displays nothing more than an icon and title on the taskbar. Later, you can restore the window to its previous size and finish the game in progress.
File management is the process by which a computer stores data and retrieves it from storage. Although some of the file‐management interfaces across Windows may have a different look and feel, the process of managing files is similar across the board.
In order for a program to run, it must be able to read information off the hard disk and write information back to the hard disk. To be able to organize and access information—especially in larger new systems that may have thousands of files—it is necessary to have a structure and an ordering process.
Windows provides this process by allowing you to create directories, also known as folders, in which to organize files. Windows also regulates the way that files are named and the properties of files. The filename for each file created in Windows has to follow certain rules, and any program that accesses files through Windows must also comply with these rules:
Filenames are not case‐sensitive. For example, you can't have a file called working.txt and another called WORKING.TXT in the same folder. To Windows, these filenames are identical, and you can't have two files with the same filename in the same folder. We'll get into more detail on this topic a little later.

In Windows 3.x and DOS, filenames were limited to eight characters and a three‐character extension, separated by a period—known as the 8.3 file‐naming convention. Windows 95 introduced long filenames, which allowed the 255‐character filename convention.
The Windows filesystem is arranged like a filing cabinet. In a filing cabinet, paper is placed into folders, which are internal dividers, which are in a drawer of the filing cabinet. In the Windows filesystem, individual files are placed in subdirectories that are inside directories, which are stored on different disks or different partitions.
Windows also protects against duplicate filenames; no two files on the system can have exactly the same name and path. A path indicates the location of the file on the disk; it is composed of the letter of the logical drive on which the file is located and, if the file is located in a folder or subfolder, the names of those directories. For instance, if a file named pagefile.sys is located in the root of the C: drive—meaning it is not within a folder—the path to the file is C:\pagefile.sys. As another example, if a file called notepad.exe is located in the Windows directory under the root of C:, then the path to this file is C:\Windows\notepad.exe.
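You can see the same path notation at work from a command prompt; the following assumes the default Windows installation location. The first command lists the file at that exact path, and the second launches the program by its full path:

dir C:\Windows\notepad.exe
C:\Windows\notepad.exe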
Common filename extensions that you may encounter include .exe (for executable files, aka applications), .dll (for dynamic link library files), .sys (for system files), .log (for log files), .drv (for driver files), and .txt (for text files). Note that DLL files contain additional functions and commands that applications can use and share. In addition, specific filename extensions are used for the documents created with each application. For example, the filenames for documents created in Microsoft Word have a .doc or .docx extension. You'll also encounter extensions such as .mpg (for video files), .mp3 (for music files), .png and .tif (for graphics files), .htm and .html (for web pages), and so on. Being familiar with different filename extensions is helpful in working with the Windows filesystem.
Although it is technically possible to use the command‐line utilities provided within the command prompt to manage your files, this generally is not the most efficient way to accomplish most tasks. The ability to use drag‐and‐drop techniques and other graphical tools to manage the filesystem makes the process far simpler. The File Explorer is a utility that allows you to accomplish a number of important file‐related tasks from a single graphical interface.
Here are some of the tasks you can accomplish using Explorer:
You can access many of these functions by right‐clicking a file or folder and selecting the appropriate option, such as Copy or Delete, from the context menu.
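For comparison, the command-prompt equivalents of the most common Explorer tasks look like the following sequence; the file and folder names are hypothetical, and the example assumes the D:\Archive folder already exists:

md C:\Reports
copy C:\data.txt C:\Reports
ren C:\Reports\data.txt report.txt
move C:\Reports\report.txt D:\Archive
del D:\Archive\report.txt

That creates a folder, copies a file into it, renames the copy, moves it to another drive, and finally deletes it: five commands for what drag‐and‐drop and a context menu handle in a couple of clicks.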
Using Explorer is simple. A few basic instructions are all you need to start working with it. First, the Explorer interface has a number of parts, each of which serves a specific purpose. The top area of Explorer is dominated by a set of menus and toolbars that give you easy access to common commands. The main section of the window is divided into two panes: the left pane displays the drives and folders available, which is called the navigation pane, and the right pane displays the contents of the currently selected drive or folder, which is called the results pane. In recent versions of Windows, the Navigation pane is turned off. The following list describes some common actions in Explorer:
FIGURE 13.23 Windows 10 File Explorer
Besides simplifying most file‐management commands as shown here, Explorer allows you to complete a number of disk‐management tasks easily. For example, you can format and label removable media, which is discussed further in Chapter 15, “Windows Administration.”
Future chapters will delve further into operating systems and the tools, utilities, and features available with each. There is also additional coverage, as applicable, in the chapters on troubleshooting. For purposes of exam study, Table 13.4 offers a complete list of the features for each of the Windows operating systems that you need to know for the exam. It also details which chapter(s) in this book has more coverage of that particular topic.
Feature | Purpose | More Information |
---|---|---|
BitLocker | Encrypts drives; available in each OS but not in every edition | Chapter 18 |
Domain access vs. workgroup | Shared security for computers and users | Chapter 15 |
Desktop styles/user interface | Customization of the desktop and the user interface | Chapter 14 |
Remote Desktop Protocol (RDP) | Allows users and administrators to connect remotely to obtain a desktop session | Chapter 20 |
Group Policy | A mechanism inside of Active Directory that allows for the management of users and computers | Chapters 14, 17 |
TABLE 13.4 Windows features
In this chapter, you learned about the basic operating systems, application installation, and the Windows 10/11 features. Additionally, we covered the basics of the Windows structure and window management. Because Windows is a graphical system, the key to success in learning to use it is to explore the system to find out what it can do. You will then be better prepared to decipher later what a user has done.
First, we explored the various operating systems you may encounter, along with their characteristics, such as their life cycles, categories, and minimum requirements.
Next, we discussed applications as they apply to the operating system and underlying architecture, as well as the various factors that should be evaluated before installing the application.
Finally, we introduced Windows 10/11 and its various editions. We then covered some basic Windows management concepts, including file management, as well as the folder structure. We also discussed using approved hardware and updating Windows.
With the basic knowledge gained in this chapter, you are now ready to learn how to interact with the most commonly used tools, the subject of the following chapter.
The answers to the chapter review questions can be found in Appendix A.
run
cmd
command
open
cmd in the Start box followed by the program name

You will encounter performance‐based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answer compares to the authors’, refer to Appendix B.
Your organization is planning to upgrade its operating system to Windows 10. Currently all of the laptops used in your organization use Windows 8.1 Pro. The organization eventually wants to roll out the Windows 10 feature of BranchCache. How should you proceed to accommodate the upgrade and the future feature needs?
THE FOLLOWING COMPTIA A+ 220‐1102 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
The previous chapter introduced the basic components of the Windows operating systems. This chapter builds on that and focuses on the configuration of the Windows 10 operating system.
In the following sections, we will look at the Microsoft GUI from the ground up. In Chapter 13, “Operating System Basics,” we took a detailed look at its key components, and we will build on that with an exploration of basic tasks for Windows 10/11. Although the exam objectives are focused on Windows 10/11, most of these configuration concepts can be applied to Windows 8.1, 8, and even Windows 7 in many cases.
Microsoft has included a number of tools with each iteration of Windows to simplify system configuration. Although some tools have specific purposes and are used only on rare occasions, you will come to rely on a number of tools and access them on a regular basis. It is this latter set that we will examine in the following sections.
Task Manager lets you shut down nonresponsive applications selectively in all Windows versions. In current versions of Windows, it can do so much more, allowing you to see which processes and applications are using the most system resources, view network usage, see connected users, and so on. To display Task Manager, press Ctrl+Alt+Delete and click the Task Manager button or option. You can also right‐click an empty spot in the taskbar and choose Task Manager from the context menu.
Depending on the Windows version, Task Manager has various tabs. Figure 14.1 shows the common default display in Windows 10/11, but other versions vary from the seven tabs shown here: Processes, Performance, App History, Startup, Users, Details, and Services.
FIGURE 14.1 The default Task Manager in Windows
Let's look at these tabs in more detail:
FIGURE 14.2 The Details tab of Task Manager
To end a process, right‐click it in the list and click End Task. Be careful with this choice, because ending some processes can cause Windows to shut down. If you don't know what a particular process does, you can look for it in any search engine and find a number of sites that explain it.
FIGURE 14.3 The various metrics for the Details tab
You can also change the priority of a process in Task Manager's Details tab by right‐clicking the name of the process and choosing Set Priority. The six priorities, from lowest to highest, are as follows:
If you decide to change the priority of an application, you'll be warned that doing so may make it unstable. You can generally ignore this warning when changing the priority to Low, Below Normal, Above Normal, or High, but you should heed it when changing an application to the Realtime priority. Realtime means that the processor gives precedence to this process over all others—over security processes, over spooling, over everything—and this is sure to make the system unstable.
Task Manager changes the priority only for that instance of the running application. The next time the process is started, it reverts to its base priority (typically Normal).
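If you prefer the command line, you can also launch a program at a chosen priority with the start command. This is just a quick sketch; notepad.exe stands in for any program you might run.
rem Launch Notepad at a lower-than-normal priority (illustrative example)
start /belownormal notepad.exe
rem Other available switches: /low /normal /abovenormal /high /realtime
start /low notepad.exe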
Some of the items are beyond the scope of this book, but it's good to know that you can use the Performance tab to keep track of system performance. Note that the number of processes, CPU usage percentage, and commit‐charge information always appear at the bottom of the Task Manager window, regardless of which tab you have currently selected.
FIGURE 14.4 The Performance tab of Task Manager
The Startup tab in Task Manager replaces the functionality that was provided by the msconfig command in prior versions of Windows, such as Windows 7.
Use Task Manager whenever the system seems bogged down by an unresponsive application.
Microsoft created the Microsoft Management Console (MMC) interface as
a frontend in which you can run administrative and configuration tools.
Many administrators don't even know that applications they use regularly
run within an MMC. In the following sections, we will cover many of the
different MMC snap‐ins you will use in your day‐to‐day administration
and configuration of the operating system. You can start the MMC by
pressing Windows + R to open Run, typing mmc in the box, and then
clicking OK. Once the MMC is started, you can create a custom MMC,
adding the snap‐ins discussed below. Click File ➢ Add Or Remove
Snap‐ins, and then select which snap‐ins to add. For more information
about the MMC, visit https://docs.microsoft.com/en-us/troubleshoot/windows-server/system-management-components/what-is-microsoft-management-console
.
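As a side note, most of the snap‐ins covered in the following sections can also be opened individually from the Run dialog or a command prompt simply by entering their .msc filename, as in this short list:
rem Common snap-ins can be opened directly by their .msc filenames
compmgmt.msc
eventvwr.msc
diskmgmt.msc
devmgmt.msc
certmgr.msc
lusrmgr.msc
perfmon.msc
gpedit.msc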
Windows includes a piece of software for managing computer settings: the Computer Management console. The Computer Management console can manage more than just installed hardware devices. In addition to Device Manager, which functions almost identically to the one that has existed since Windows 9x, it can manage all the services running on the computer. It also contains Event Viewer, which shows system errors and events, as well as tools for configuring the software components of the computer's hardware.
To access the Computer Management console, right‐click the Start menu and select it from the context menu. (In Windows 8, it's on the Start screen.)
After you are in Computer Management, you will see all the tools available, as shown in Figure 14.5. This is one power‐packed interface and includes the following system tools:
FIGURE 14.5 Computer Management
Event Viewer (eventvwr.msc
) is an MMC snap‐in
that shows a lot of detailed information about what is running on your
operating system. You can start it in several different ways—for
example, by clicking Start, typing Event Viewer, and selecting the Event
Viewer app, or by right‐clicking the Start button and selecting Event
Viewer from the context menu. In addition, you can add it as a snap‐in
inside the MMC or press Windows key + R, type
eventvwr.msc
, and press
Enter.
Event Viewer should be the first place you look when you are trying to solve a problem whose solution is not evident. The system and applications will often create an entry in Event Viewer that can be used to verify operation or diagnose problems, as shown in Figure 14.6.
FIGURE 14.6 Event Viewer
Table 14.1 highlights the three main event logs that you should be concerned with for the exam. Individual Windows features can also store their events in their own log files, found under Applications and Services Logs.
Event Log | Description |
---|---|
Application | Events generated by applications installed on the operating system |
Security | Events generated by the Security Reference Monitor in the Executive kernel |
System | Events generated by the operating system |
TABLE 14.1 Event Viewer logs
Although you might think that all the security‐related information is in the Security log, you're only half right. The Security log is used by the Security Reference Monitor inside the Executive kernel. It is responsible for reporting object audit attempts. Examples of object audit attempts include file access, group membership, and password changes.
Most of the useful security‐related information will be in the application and system logs. Using these logs, you can see errors and warnings that will alert you to potential security‐related problems. When you suspect an issue with the operating system or an application that interacts with the operating system, you should check these logs for clues. The event log won't tell you exactly what is wrong and how to fix it, but it will tell you if there is an issue that needs to be investigated.
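If you need to check these logs without the GUI (over a remote command‐line session, for example), the built‐in wevtutil utility can query them. This is only a quick sketch; adjust the log name and event count to suit.
rem Show the five newest events from the System log in plain text
wevtutil qe System /c:5 /rd:true /f:text
rem The Application log can be queried the same way
wevtutil qe Application /c:5 /rd:true /f:text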
The Disk Management (diskmgmt.msc
) snap‐in is used to
view disk information, such as volumes configured on the physical disk
and the filesystems that are formatted on the volume. Disk
Management, shown in Figure 14.7, isn't used just to view
information; you can also use it to partition volumes on a new or
existing disk, format filesystems, and mount volumes to drive letters.
These are just a few configuration tasks; we will cover Disk Management
later in this chapter.
Task Scheduler (taskschd.msc
), accessible beneath
Computer Management or Administrative Tools in Control Panel, allows you
to configure an application to run automatically or at any regular
interval (see Figure 14.8). A number of terms are
used to describe the options for configuring tasks: action
(what the task actually does), condition (an optional
requirement that must be met before a task runs), setting (any
property that affects the behavior of a task), and trigger (the
required condition for the task to run).
For example, you could configure a report to run automatically (action) every Tuesday (trigger) when the system has been idle for 10 minutes (condition), and only when requested (setting).
FIGURE 14.7 Disk Management
FIGURE 14.8 Task Scheduler
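The same kind of task can also be created from a command prompt with the schtasks utility. In this sketch, the task name, script path, day, and time are all placeholders.
rem Create a task that runs a (hypothetical) report script every Tuesday at 9:00 AM
schtasks /create /tn "WeeklyReport" /tr "C:\Tools\report.bat" /sc weekly /d TUE /st 09:00
rem Verify that the task was created
schtasks /query /tn "WeeklyReport"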
Device Manager (devmgmt.msc
) is an indispensable tool
for the management of peripherals and components attached to the
computer. You can view all the devices in the system, as shown in Figure
14.9. Device Manager has been around since Windows 95, and it hasn't
changed all that much. Because computer systems have evolved and all
newer computer systems are plug‐and‐play capable, this tool is usually used only when you doubt that a component is working properly, when you're troubleshooting a problem, or when you want to check a driver. Device Manager was used far more in the past, when most peripherals had to be configured manually.
FIGURE 14.9 Device Manager
Device Manager allows you to manually update the driver for a device, roll back a driver to a prior version, uninstall a device, and disable a device. Some peripherals, such as a network card, let you configure whether the device is allowed to wake the computer. Others, such as a disk drive or USB device, can be configured to sleep if they are not used for a specified period of time.
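Device Manager is the usual place to review drivers, but if you want a quick inventory from a command prompt, the driverquery utility lists the installed drivers. This is simply a convenient cross‐check.
rem List installed drivers with basic details
driverquery
rem Verbose output, including the driver file path
driverquery /v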
Certificate Manager (certmgr.msc
) is used to view and
manage certificates used by the web browser and the operating system, as
shown in Figure 14.10. Certificates are used
for a number of reasons; the most common are encryption and digital
signatures that provide trust.
FIGURE 14.10 Certificate Manager
Certificate Manager allows you to manage certificates for your user account, a service account, or the computer account. When you choose to manage your user account, the certificates managed can be used only by your account. So, the certificates are only relevant while you are logged in and using an application that requires a certificate from the certificate store. When you choose to manage certificates for a particular service account, the certificate is only relevant to that specific service, such as a virtual private network (VPN) service. When you choose to manage certificates for the computer account, these certificates are relevant only for the operating system, even if someone is not logged in. This configuration mode is commonly used when configuring a certificate for the Internet Information Services (IIS) web server.
The Local Users and Groups (lusrmgr.msc
) MMC snap‐in
allows for granular control over local user accounts and groups for the
Windows operating system. You can access the Local Users and Groups MMC
by right‐clicking the Start menu and choosing Computer Management. You
can also press the Windows key + R, type
lusrmgr.msc
, and press
Enter. When the tool launches, if you click Users you will see several
built‐in user accounts, all of which are disabled (depicted with the
down arrow), as shown in Figure 14.11. The only active account
on a brand‐new Windows 10/11 operating system is the first account
set up during installation.
FIGURE 14.11 Local Users and Groups
The Local Users and Groups snap‐in is divided into the following two parts:
Performance Monitor (perfmon.msc
) varies a bit in
different versions of Windows, but it has the same purpose throughout:
to display performance counters, as shown in Figure
14.12. The tool collects counter information and then sends that
information to a console or event log.
FIGURE 14.12 Performance Monitor
Performance Monitor's objects and counters are very specific; you can use Performance Monitor as a general troubleshooting tool as well as a security troubleshooting tool. For instance, you can see where resources are being used and where the activity is coming from. In Exercise 14.2, you see how to work with Performance Monitor.
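The counters that Performance Monitor graphs can also be sampled from a command prompt with the typeperf utility. As a small sketch, the following samples total CPU utilization five times; the counter path shown is a common built‐in counter.
rem Sample total CPU utilization five times (one sample per second by default)
typeperf "\Processor(_Total)\% Processor Time" -sc 5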
The Group Policy Editor (gpedit.msc
) tool allows you to
edit the local Group Policy for the operating system. Group
Policy is a mechanism that allows an administrator to set various
settings to customize the operating system. These settings can restrict
the operating system in a number of ways, such as removing the settings
tab from a built‐in application. Group Policy can also control aspects
of security for the operating system. Group Policy also contains a
mechanism to enforce these settings by reapplying the settings
periodically in the event they are changed.
The Group Policy Editor divides all settings between computer
settings and user settings. The computer settings affect the operating
system and how it behaves. The user settings affect the user logged onto
the operating system. If you open the Group Policy Editor by typing
gpedit.msc
in the Run
dialog box and pressing Enter, the tool will open and display a tree of
settings, as shown in Figure 14.13.
There is another way to open the Group Policy Editor in which you can
edit the Group Policy for a specific user, administrators, or all
non‐administrators. You must first start the MMC tool by typing
mmc
in the Run dialog box
and then pressing Enter. Then, add the snap‐in Group Policy Object
Editor. After it is selected, another dialog box will pop up where you
can browse to select a specific user or group or a special group of
users, as shown in Figure 14.14.
FIGURE 14.13 Group Policy Editor
FIGURE 14.14 Group Policy Editor browse dialog box
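After you edit local Group Policy, the settings are reapplied periodically, as noted earlier. If you want to apply and verify changes right away, the gpupdate and gpresult commands provide a quick way to do so from a command prompt.
rem Reapply computer and user policy settings immediately
gpupdate /force
rem Display a summary of the policies applied to the current user and computer
gpresult /r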
When the Microsoft Management Console (MMC) was first introduced with Windows 2000, it was to be a single pane of glass for monitoring and configuration of the Windows operating system. Over 20 years later, we still use a mixture of tools outside the MMC to monitor and configure Windows. Next, we discuss some of these additional utilities and tools.
The System Configuration (msconfig.exe
) tool allows you
to configure how Windows 10/11 starts up, as well as launching
additional tools. The tabs of the System Configuration tool differ a bit
based on the Windows version you are running. The main tabs are General,
Boot, Services, Startup, and Tools. Figure 14.15 shows the General tab for
Windows 10. From here, you can configure the startup options.
FIGURE 14.15 System Configuration General tab in Windows 10
Figure 14.16 shows the Boot tab for Windows 10. Note that from here, you can configure the next boot to be a safe boot, and you can turn on the boot information so that you can see drivers as they load—which is quite useful when a system keeps hanging during boot. Although the option is titled Safe Boot, this is classically referred to as booting into Safe mode.
Figure 14.17 shows the Services tab for Windows 10. On this tab, you can view the services installed on the system and their current status (running or stopped). You can also enable or disable services.
FIGURE 14.16 System Configuration Boot tab in Windows 10
FIGURE 14.17 System Configuration Services tab in Windows 10
In Windows 7 and earlier, the Startup tab allowed you to configure applications that start up when any user logs in. In Windows 8/8.1 and Windows 10/11, the Startup tab redirects you to the Startup tab in Task Manager, where these tasks can be performed, as shown in Figure 14.18.
FIGURE 14.18 System Configuration Startup tab and Task Manager Startup tab
Figure 14.19 shows the Tools tab for Windows 10. On this tab, you can launch a number of administrative tools to configure various Windows features.
FIGURE 14.19 System Configuration Tools tab in Windows 10
Keep in mind that the tabs differ slightly based on the operating system version. We walked through the CompTIA objectives related to this tool in this discussion.
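Behind the scenes, the Safe Boot check box on the Boot tab sets a safeboot value in the Boot Configuration Data (BCD) store. As a cautionary sketch (run from an elevated prompt, and only when you intend to change how the system boots), bcdedit can set and clear the same value.
rem Boot into Safe Mode (minimal) on the next restart
bcdedit /set {current} safeboot minimal
rem Remove the setting so the next boot is normal again
bcdedit /deletevalue {current} safeboot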
The System Information (msinfo32.exe)
tool displays a
fairly thorough list of settings on the machine (see Figure
14.20). You cannot change any values here, but you can search,
export, and save the output. Several command‐line options can be used
when starting msinfo32
; Table 14.2 summarizes them.
FIGURE 14.20 The Msinfo32 interface shows configuration values for the system.
Option | Function |
---|---|
/computer | Allows you to specify a remote computer on which to run the utility |
/nfo | Creates a file and saves it with an .nfo extension |
/report | Creates a file and saves it with a .txt extension |
TABLE 14.2 msinfo32 command‐line options
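For example, the switches in Table 14.2 can be used as shown below; the file paths and the computer name are placeholders.
rem Save a plain-text system summary
msinfo32 /report C:\Temp\sysinfo.txt
rem Save the full report in .nfo format, which msinfo32 can reopen later
msinfo32 /nfo C:\Temp\sysinfo.nfo
rem Run the utility against a remote computer (hypothetical name)
msinfo32 /computer SERVER01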
Resource Monitor (resmon.exe
) is used to identify
resource utilization of CPU, disk, network, and memory on Windows. The
utility was originally introduced in Windows Vista and has been included
with all new releases of Windows. The utility can be launched a few
different ways. The first is by entering the command
resmon.exe
in the Run
dialog box. Another way to open the Resource Monitor is from inside Task
Manager. At the lower left of the Performance tab is the option Open
Resource Monitor; clicking it will launch Resource Monitor in a new
window. Figure 14.21 shows Resource Monitor
and the four tabs for each resource.
FIGURE 14.21 Resource Monitor
The CPU tab allows you to identify the process with the highest amount of CPU utilization on the operating system. Each column can be sorted; it is as simple as clicking the CPU column to sort the column and find the process with the highest CPU utilization. When you do find the process, you can right‐click the entry and choose to end the process, suspend the process, and even search the process online.
The Memory tab displays detailed memory usage of the processes running on the operating system, as shown in Figure 14.22. This tab can help you identify memory in use, memory reserved by hardware, and memory available by the operating system and processes.
FIGURE 14.22 Resource Monitor Memory tab
The Disk tab helps you identify a process that is overusing the hard drive with a high amount of read requests, write requests, or overall usage. The Disk tab will also allow you to identify the I/O priority of processes and their response time. This tab is extremely useful when you suspect that a process is slowing down the system.
The Network tab displays all the processes that are currently utilizing the network. The processes can be sorted by sent, received, and total bytes per second. The Network tab does a lot more than just display activity; it also shows the destination addresses for each process. This is valuable information if you suspect that name resolution is a problem for the remote application. Opening the TCP Connections drop‐down, you can view the active TCP connections on the operating system, along with packet loss and latency. Normally this active view of network traffic can be obtained only with a packet capture tool or another third‐party tool. The Network tab is not just useful for information on outgoing connections; it can also display processes listening on TCP and UDP ports. Resource Monitor will also display the firewall status for the processes listening on the operating system.
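If you want to cross‐check what the Network tab reports, the netstat command offers a similar view of connections and listening ports from a command prompt; the PID used in the filter below is only an example.
rem List all connections and listening ports with owning process IDs
netstat -ano
rem Map an owning PID (example: 1234) back to a process name
tasklist /fi "PID eq 1234"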
Storage space is finite in a computer, and it is inevitable that it
will be consumed over time. It can be consumed by a number of files,
such as the operating system, downloaded programs, temporary files,
updates, deleted files, and of course your data files. The Disk Cleanup
(cleanmgr.exe
) tool can clean up operating system files to
free up space without affecting your data files. Disk Cleanup, shown in
Figure
14.23, can be launched by entering the command
cleanmgr.exe
in the Run
dialog box, or by right‐clicking the C: drive, selecting Properties, and
then clicking Disk Cleanup.
FIGURE 14.23 Disk Cleanup
Once Disk Cleanup is launched, you can select the various files you want to clean up on the system drive. The categories you can delete include Downloaded Program Files, Temporary Internet Files, DirectX Shader Cache, Delivery Optimization Files, Recycle Bin, Temporary Files, Thumbnails, Windows Update Files, Windows Defender Antivirus, and countless other files. For some categories, such as Temporary Internet Files and Downloaded Program Files, you can view the individual files before deleting them; for other categories, you cannot. After selecting the categories you want to delete, you can purge them from the disk, thus freeing up space.
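Disk Cleanup can also be preconfigured and then run unattended from a command prompt. In this sketch, the /sageset switch stores your category selections under an arbitrary number (1 here), and /sagerun runs that stored selection later, for example from a scheduled task.
rem Choose the cleanup categories and store them as configuration set 1
cleanmgr /sageset:1
rem Run the stored configuration later (for example, from a scheduled task)
cleanmgr /sagerun:1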
Although, for the most part, Windows is functional from the time it is installed, Microsoft realized that if someone were going to use computers regularly, they would probably want to be able to customize their environment so that it would be better suited to their needs—or at least more fun to use. As a result, the Windows environment has a large number of utilities that are intended to give you control over the look and feel of the operating system.
This is, of course, an excellent idea. It is also a bit more freedom than some less‐than‐cautious users seem to be capable of handling. You will undoubtedly serve a number of customers who call you to restore their configuration after botched attempts at changing one setting or another.
More than likely, you will also have to reinstall Windows yourself a few times because of accidents that occur while you are studying or testing the system's limits. This is actually a good thing, because no competent computer technician can say that they have never had to reinstall because of an error. You can't really know how to fix Windows until you are experienced at breaking it. So, it is extremely important to experiment and find out what can be changed in the Windows environment, what results from those changes, and how to undo any unwanted results. To this end, we will examine the most common configuration utility in Windows: Control Panel. Table 14.3 describes some of its popular applets; note that not all applets are available in all versions of Windows.
Applet name | Function |
---|---|
Device Manager | Adds and configures new hardware |
Programs and Features | Changes, adds, or deletes software |
Administrative Tools | Performs administrative tasks on the computer |
Folder Options | Configures the look and feel of how folders are displayed in Windows File Explorer |
Internet Options | Sets a number of Internet connectivity options |
Network and Sharing Center | Sets options for connecting to other computers |
Power Options | Configures different power schemes to adjust power consumption |
Devices and Printers | Configures printer settings and print defaults |
System | Allows you to view and configure various system elements (discussed in more detail later in this chapter) |
Windows Defender Firewall | Configures basic firewall exemptions |
Mail | Configures the Outlook mail profile |
Sound | Configures audio |
User Accounts | Configures local accounts on the operating system |
Indexing Options | Configures the folders to index for search capabilities |
Ease of Access | Allows for accessibility options for users |
TABLE 14.3 Selected Windows Control Panel applets
In the current version of Windows, when you first open Control Panel, it appears in Category view, as shown in Figure 14.24. Control Panel programs have been organized into various categories, and this view provides you with the categories from which you can choose. When you choose a category and pick a task, the appropriate Control Panel program opens. Or, you can select one of the Control Panel programs that is part of the category.
FIGURE 14.24 The Windows Control Panel in Category view
You can change this view to Classic view (or Small/Large Icons in Windows 10/11, Windows 8/8.1, and Windows 7), which displays all the Control Panel programs in a list, as in older versions of Windows. The specific wording of the CompTIA objective (1.4) for this exam reads, “Given a scenario, use the appropriate Microsoft Windows 10 Control Panel utility.” By default, Windows organizes the items by category rather than as large icons. Therefore, we strongly suggest that administrators change to the Large Icons view. To do so, select Large Icons in the View By drop‐down box in the upper‐right corner of Control Panel. Throughout this chapter, when we refer to accessing Control Panel programs, we will assume that you have changed the view to the Large Icons view.
For a quick look at how the Control Panel programs work, in Exercise
14.3, you'll examine some of the settings in the Date and Time
applet (timedate.cpl
).
The Date and Time applet is used to configure the system time, date,
and time zone settings, which can be important for files that require
accurate time stamps (or for users who don't wear a watch). Because Date
and Time is a simple program, it's a perfect example to use. Current
versions of Windows have an Internet Time Settings tab, which enables
you to synchronize time on the computer with an Internet time server
(the options in Windows 10/11 are shown in Figure
14.25). By default, the Internet time server is set to time.windows.com
.
FIGURE 14.25 Windows Date and Time/Internet Time Settings
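Time synchronization can also be checked or forced from an elevated command prompt with the w32tm utility, which talks to the same Windows Time service that the Internet Time tab configures.
rem Show the current time source and synchronization status
w32tm /query /status
rem Force an immediate synchronization with the configured time server
w32tm /resync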
You can configure regional settings through the Control Panel applet
Region in Windows 10/11 and 8/8.1. Using this applet
(intl.cpl
), you can choose which format is used for numbers
(see Figure 14.26), your geographic
location, and the language to be used for non‐Unicode programs. In
Windows 10/11, the Settings app is also used to set the layout of the
keyboard you are using as well as set the regional settings previously
mentioned.
FIGURE 14.26 Windows Region Control Panel applet
The ability to support so many languages is provided through the use of the Unicode standard. In Unicode and the Universal Character Set (UCS), each character has a 16‐bit value, which allows up to 65,536 distinct characters to be represented.
If you click Additional Settings, you can go beyond the date and time formats and also configure number and currency, as shown in Figure 14.27.
FIGURE 14.27 Windows Additional Settings Region Control Panel applet
The Internet Options applet (inetcpl.cpl
) brings up the
Internet Properties dialog box, as shown in Figure
14.28. The tabs include General, Security, Privacy, Content,
Connections, Programs, and Advanced. Use this applet to configure the
browser environment for Internet Explorer 11 and specify such things as
the programs used to work with files found online.
FIGURE 14.28 Windows 10 Internet Options Control Panel applet
The File Explorer Options applet will open to the General tab, as shown in Figure 14.29. Using this tab, you can change the default opening pane for File Explorer. The Quick Access option, the default, displays frequently used folders and recent files. The This PC option displays the common folders found under This PC, along with devices and drives. In addition, you can control how folders open, such as opening folders in the same window or opening folders in their own new windows. You can change how items are opened, such as single‐click or double‐click. This tab also lets you change privacy settings, such as showing recently used file folders in the Quick Access view. By default, Windows 10/11 will be helpful by showing these recently used files and folders, but you may want to shut that behavior off. After doing so, you should clear the File Explorer history by clicking the Clear button in the General tab of the Folder Options applet.
FIGURE 14.29 Windows 10 File Explorer General Options
The View tab in the File Explorer Options applet controls how files and folders are displayed in File Explorer. The settings range from always showing menus to showing all folders in the Navigation pane. One of the first settings that is usually changed is Hide Extensions For Known File Types, because seeing the extensions is really handy.
Some of the more important files that you will need to work on are hidden by default as a security precaution. To make certain folders or files visible, you need to change the display properties of Windows File Explorer, as shown in Figure 14.30. You learn how to do this in Exercise 14.4.
FIGURE 14.30 Windows 10 File Explorer View Options
The Search tab in the Folder Options applet controls the Search and Index feature in Windows 10/11. You can turn off using the search feature and change the behavior of non‐indexed locations.
The System applet in Control Panel is one of the most important
applets, and technically it's not an applet. In most recent versions of
Windows 10/11, the System applet in Control Panel will open the Settings
app to the About screen. From within the Settings app panel, you can
make a large number of configuration changes to a Windows machine. If
you click Advanced System Settings, the classic System Properties
(sysdm.cpl
) will open. (See Figure
14.31 for the Windows 10/11 classic System Properties applet.) You
can perform a number of functions in this applet, which can include some
of the following options:
FIGURE 14.31 Windows System Properties Control Panel applet
In the following sections, we will look more closely at the functionality of the tabs.
This tab is used to define whether the machine is in a workgroup or a domain environment. We talk more about networking in Chapter 15, “Windows Administration,” but in general terms, here's the difference between a workgroup and a domain:
This tab includes a number of tools that enable you to change how the hardware on your machine is used. The most useful is the ability to open the Device Manager directly from this tab. The other setting on this tab is how Windows behaves when you plug in devices. By default, Windows will automatically download drivers, apps, and custom icons from the devices plugged in. When you purchase a hardware device, odds are that it's been in the box for a while. By the time it gets made, packaged, stored, delivered to the store, stored again at the retailer, and then purchased by you, it's entirely likely that the company that made the device has updated the driver—even possibly a few times if there have been a lot of reported problems.
The Advanced tab has several subheadings, each of which can be configured separately, as shown in Figure 14.32. The following options are among those on this tab.
Although it is hidden in the backwaters of Windows’ system configuration settings, the Performance option holds some important settings that you may need to configure on a system. To access it, on the Advanced tab, click Settings in the Performance area.
In the Performance window, you can set the size of your virtual memory and how the system handles the allocation of processor time. In Windows, you also use Performance to configure visual effects for the GUI.
How resources are allocated to the processor is normally not something that you will need to modify. It is set by default to optimize the system for foreground applications, making the system most responsive to the user who is running programs. This is generally best, but it means that any applications (databases, network services, and so on) that are run by the system are given less time by the system.
FIGURE 14.32 Windows System Properties Advanced Tab
There are two types of environment variables (as shown in Figure 14.33), and you can access either one by clicking the Environment Variables button in the System Properties window.
FIGURE 14.33 Windows environment variables
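Environment variables can also be viewed and set from a command prompt. In this sketch the variable name and value are placeholders; note that setx stores the value for future sessions, while set affects only the current command prompt.
rem Show the variables defined in the current session
set
rem Create or change a user environment variable (placeholder name and value)
setx MY_DATA_DIR "D:\Data"
rem Add /M from an elevated prompt to set a system-wide variable instead
setx MY_DATA_DIR "D:\Data" /M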
In Windows, every user is automatically given a user profile when they log into the workstation. This profile contains information about their settings and preferences. Although it does not happen often, occasionally a user profile becomes corrupted or needs to be destroyed. Alternatively, if a particular profile is set up appropriately, you can copy it so that it is available for other users. To do either of these tasks, use the User Profiles settings to select the user profile with which you wish to work. You will be given three options, as shown in Figure 14.34.
FIGURE 14.34 Windows User Profiles Settings
The Windows Startup And Recovery options, shown in Figure 14.35, are relatively straightforward. They involve two areas: what to do during system startup and what to do in case of unexpected system shutdown.
FIGURE 14.35 Windows Startup And Recovery options
The System Protection tab lets you disable/enable and configure the System Restore feature, as shown in Figure 14.36. When System Restore is enabled on one or more drives, the operating system monitors the changes that you make on your drives. From time to time, it creates what is called a restore point. Then, if you have a system crash, it can restore your data back to the restore point. You can turn on System Restore for all drives on your system or for individual drives. Note that turning off System Restore on the system drive (the drive on which the OS is installed) automatically turns it off on all drives.
FIGURE 14.36 Windows System Protection Options
The Remote tab lets you enable or disable Remote Assistance and Remote Desktop, as shown in Figure 14.37. Remote Assistance permits people to access the system in response to requests issued by the local user using the Windows Remote Assistance tool. Remote Desktop permits people to log into the system at any time using the Remote Desktop Connection tool. This can help an administrator or other support person troubleshoot problems with the machine from a remote location.
FIGURE 14.37 Windows Remote options
Remote Assistance is enabled by default. It is handled at two levels. Having just Remote Assistance turned on allows the person connecting to view the computer's screen. To let that person take over the computer and be able to control the keyboard and mouse, click Advanced, and then, in the Remote Control section, click Allow This Computer To Be Controlled Remotely. You can also configure Remote Desktop here.
The User Accounts applet allows you to view and create accounts for the Windows operating system. You can change the account name that appears on the Welcome and Start screen. You can also change the account type by selecting the Standard or Administrator radio button. In Windows 10, the Settings app for Accounts allows you to change the picture that is displayed along with your username. In Windows 8/8.1 and Windows 7, the user picture can be selected from the User Accounts applet.
In addition to the management of user accounts, the User Accounts applet allows you to change the User Account Control (UAC) settings for the operating system, as shown in Figure 14.38.
FIGURE 14.38 Windows User Accounts applet
The Power Options applet (powercfg.cpl
) allows you to
choose a power plan of Balanced, Power Saver, or High Performance, as
shown in Figure 14.39. Each power plan dictates
when devices—namely, the display device and the computer—will turn off
or be put to sleep.
FIGURE 14.39 Windows Power Options applet
When you click Change Plan Settings, you can change how fast the display is turned off and how fast the computer is put to sleep, as shown in Figure 14.40.
FIGURE 14.40 Windows Edit Plan Settings
Clicking Change Advanced Power Settings allows you to configure a number of settings based on power, as shown in Figure 14.41. These settings include specifying when the hard drive turns off, turning off the wireless adapter, specifying Internet options for JavaScript Timer Frequency, and determining the system cooling policy. The applet allows you to tweak your power policy, and you can always restore the plan defaults.
FIGURE 14.41 Windows Advanced Power Settings
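The same power plans can be inspected and adjusted from a command prompt with the powercfg utility; the timeout values below are only examples.
rem List the available power plans and show which one is active
powercfg /list
powercfg /getactivescheme
rem Turn the display off after 10 minutes on AC power (example value)
powercfg /change monitor-timeout-ac 10
rem Put the computer to sleep after 30 minutes on battery (example value)
powercfg /change standby-timeout-dc 30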
The power plan configured in Windows will interface with the Advanced Configuration and Power Interface (ACPI). ACPI must be supported by the system BIOS/UEFI in order to work properly; however, most computer hardware made in the last decade supports it. ACPI provides the operating system with the methods it needs to control the hardware. This is in contrast to Advanced Power Management (APM), which gave the operating system only limited control and let the BIOS do all the real work. Because of this, it is not uncommon to find legacy systems that support APM but not ACPI.
There are four main states of power management common in most operating systems:
If you are interested in saving power with a system that is not accessed often, one option is to employ Wake on LAN (WoL). Wake on LAN is an Ethernet standard implemented via a card that allows a “sleeping” machine to awaken when it receives a wakeup signal. Wake on LAN cards have more problems than standard network cards. In our opinion, this is because they're always on. In some cases, you'll be unable to get the card working again unless you unplug the PC's power supply and reset the card.
Another power‐saving feature is to choose what happens when you close the lid of your laptop. This option is only available on laptop devices that have a lid closure sensor. The option can be found on the left‐hand side of the Power Options Control Panel applet, and it's labeled Choose What Closing The Lid Does. When the dialog box opens, you will see drop‐down menus for pressing the power button or sleep button and closing the lid, as shown in Figure 14.42. Depending on whether the laptop is plugged in or on battery power, you can choose a different option.
FIGURE 14.42 Windows Advanced Power System Settings
Windows Fast Startup is another advanced feature that was originally introduced with Windows 8 as Fast Boot. The feature allows the system to hibernate during shutdown so that the system appears to start up more quickly. Fast Startup attempts to detect when a cold boot is required, such as when installing a program. If a cold boot is
required, you have two options. The first option is to turn Fast Startup
off in the Power Options System Settings, as shown in Figure 14.42. You can also use the
shutdown /s /t 0
command; the /s
switch will
shut down the system and the /t 0
switch will do it
immediately. The shutdown
command also allows for shutdown
of remote computers with the command
shutdown /m \\computername /s /t 0
.
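Because Fast Startup relies on hibernation, another way to disable it entirely is to turn hibernation off from an elevated command prompt. The powercfg /a switch also shows which sleep states, including Fast Startup, are currently available on the system.
rem Show the sleep states available on this system
powercfg /a
rem Disable hibernation, which also disables Fast Startup
powercfg /hibernate off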
The Universal Serial Bus (USB) selective suspend feature will allow the USB hub on the motherboard to suspend power to a device via the USB port. This is a handy feature to save battery power on a laptop if an external hard drive, a mouse, or some other device that requires power is not being used. However, this feature can also be problematic if you have a communication device connected via USB. You can turn this feature off on the Power Management tab of the device inside Device Manager, as shown in Figure 14.43.
FIGURE 14.43 Device Properties, Power Management tab
The Credential Manager applet allows you to manage stored credentials for applications such as the Internet Explorer and Microsoft Edge browsers, as well as the operating system itself. The Credential Manager service built into Microsoft operating systems stores username and password credentials in an encrypted database. The applet allows you to interact with the service to make changes to the stored credentials, as shown in Figure 14.44.
FIGURE 14.44 Windows Credential Manager applet
The Programs and Features applet (appwiz.cpl
) allows you
to view and uninstall desktop applications that are installed in
Windows, as shown in Figure 14.45. You can also see the
installed updates by clicking View Installed Updates on the left side of
the applet.
The Programs and Features applet also allows you to install and remove features in Windows. When you click Turn Windows Features On Or Off on the left side, a dialog box will appear with a list of OS features that can be selected or deselected. Features are different from desktop applications because they are an integral part of the operating system. Examples include Hyper‐V, Internet Explorer 11, and Windows PowerShell 2.0, just to name a few.
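Windows features can also be listed and enabled from an elevated command prompt with the DISM utility. In this sketch, Microsoft-Hyper-V-All is used as an example feature name; whatever feature you enable must match an entry from the Get-Features list.
rem List the optional features and their current state
DISM /Online /Get-Features /Format:Table
rem Enable a feature by name (example: Hyper-V) without restarting immediately
DISM /Online /Enable-Feature /FeatureName:Microsoft-Hyper-V-All /All /NoRestart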
The Devices and Printers applet allows you to manage external devices such as external hard drives, printers, and webcams. From Control Panel, you can discover and add new devices that are connected to your network. Devices and Printers is shown in Figure 14.46.
FIGURE 14.45 Windows Programs and Features applet
FIGURE 14.46 Windows Devices and Printers applet
The Sound applet (mmsys.cpl
) allows you to view and
change the default playback and recording device for sound on the
system. By right‐clicking the device and choosing Properties, you can
view and modify the properties of a playback or recording device. The
changes will vary by the device and the vendor of the device—common
options are playback and recording levels and enhancements to the
playback and recording levels. In addition, the Sound applet enables you
to change the operating system's sound scheme, allowing you to change
the various sounds, as shown in Figure 14.47.
FIGURE 14.47 Windows Sound applet
The Troubleshooting applet does exactly what its name says it does—it allows troubleshooting of Programs, Hardware and Sound, Network and Internet, and System and Security, as shown in Figure 14.48. One notable feature is application compatibility troubleshooting: when Windows identifies a problematic application, the Troubleshooting wizard will appear and try to resolve the application's compatibility issues.
FIGURE 14.48 Windows Troubleshooting applet
The Network and Sharing Center applet allows you to view and change the active network connections for the operating system, as shown in Figure 14.49. The applet displays the current network profile that the computer has been placed into by the Windows Firewall service. From the main page of the applet, you can click Change Adapter Settings to see the classic network adapter view, which allows the adapters to be configured manually. From the main page of the applet, you can also click Change Advanced Sharing Settings on the left side. This allows you to change network discovery options for the network profile chosen as well as turn on or off file and printer sharing globally for the operating system.
The Device Manager applet (hdwwiz.cpl
) was first
introduced in Windows 95 and has hardly changed since its introduction
over 20 years ago. If the applet looks familiar, it is the same as the
MMC Device Manager (devmgmt.msc
). Device Manager allows you
to view and change hardware devices on the operating system, as shown in
Figure
14.50. This applet allows the administrator to load and update
third‐party drivers as well as drivers in the Windows Catalog. Device
Manager is usually the first place you should check if a hardware device
does not function after installation.
FIGURE 14.49 Windows Network and Sharing Center applet
FIGURE 14.50 Windows Device Manager
BitLocker drive encryption is a low‐level full‐disk encryption feature that can be controlled from the BitLocker Drive Encryption applet, as shown in Figure 14.51. The BitLocker Drive Encryption applet also allows you to turn on BitLocker drive encryption on removable media devices.
FIGURE 14.51 Windows BitLocker Drive Encryption applet
The Windows Firewall (firewall.cpl
), which in Windows 10
is named Windows Defender Firewall, is used to block access from the
network (be it internal or the Internet). While host‐based
firewalls are not as secure as other types of firewalls, this was a
great move in the right direction. It first appeared in Windows XP
Service Pack 2 but was released as a polished product in Windows
Vista.
Figure 14.52 shows the opening screen of Windows Defender Firewall in Windows 10/11. Windows Defender Firewall is turned on by default. It also blocks incoming traffic by default.
Clicking Advanced Settings on the left opens the Windows Defender Firewall with Advanced Security MMC, as shown in Figure 14.53. This MMC allows fine‐grained control of the Windows Defender Firewall inbound and outbound rules. Inbound rules can be created to allow and deny network traffic inbound to applications and the operating system. Outbound rules can be created to allow and deny network traffic leaving the operating system. Only inbound firewall rules are restricted by default. When an application attempts to listen for specific network traffic, the operating system displays a dialog box asking you to confirm the network activity; your response automatically creates an inbound rule for the application's traffic. The Windows Defender Firewall with Advanced Security MMC allows you to pre‐create the rule. In addition, you can create connection security rules for authenticating and encrypting traffic.
FIGURE 14.52 Windows Defender Firewall in Windows 10
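Inbound rules like those described above can also be created from an elevated command prompt with netsh; in this sketch, the rule name and port number are placeholders.
rem Allow inbound TCP traffic on port 8080 (placeholder rule name and port)
netsh advfirewall firewall add rule name="Allow Web App" dir=in action=allow protocol=TCP localport=8080
rem Review the rule that was just created
netsh advfirewall firewall show rule name="Allow Web App"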
When Microsoft Outlook is installed in Windows, it is configured through the Mail applet. If you do not have Microsoft Outlook installed, the icon will simply not show up in Control Panel. The Mail applet is how you configure an email account, additional data files (OST files), RSS feeds, SharePoint lists, Internet calendars, published calendars, and address books, as shown in Figure 14.54. In the main dialog box you can even set up different profiles for different accounts. If more than one profile is configured, you will be prompted when Outlook launches to specify the profile you want to use.
Outlook will usually configure itself automatically by asking a series of questions when it first launches. The initial configuration will be stored and can be accessed by launching the Mail applet.
FIGURE 14.53 Windows Defender Firewall with Advanced Security in Windows
FIGURE 14.54 Outlook Mail applet
The Indexing service was introduced as a desktop search engine with Windows NT 4.0. Today the Indexing service is an integral part of Windows 10/11. It's an exceptional feature that is a requirement for today's volume of data. The Indexing service will systematically index files such as Microsoft Office documents, PDFs, text files, and many other file types. When searching for a word using File Explorer in a folder that is indexed, the Indexing service is queried directly, thus returning fast results. If the folder is not indexed, then the search process grinds through each file in the folder and produces results at a much slower pace.
You access the Indexing Options dialog box, shown in Figure 14.55, by using the Index Options applet in Control Panel. The default locations indexed are Internet Explorer History, Start Menu, and the Users folder (excluding AppData files). You can add locations to be indexed if you store files outside the normal Documents or Desktop locations that are contained inside the Users folder.
FIGURE 14.55 Indexing Options dialog box
By clicking Advanced in the Indexing Options dialog box, you open the dialog box shown in Figure 14.56. There you can choose to index encrypted files and to treat words with diacritics (accents) as different words. You can also rebuild the index in an attempt to fix missing documents from your search. This dialog box also allows you to relocate the index database. The File Types tab allows you to add various file types to index. It contains a very inclusive range of file types, but by default many are set to index just metadata on the file. Important file types like DOCX and PDF are set to index the contents.
FIGURE 14.56 Advanced Options for indexing
The Ease of Access Center applet contains various settings that make Windows easier to use for users with motor or sensory impairments, as shown in Figure 14.57. There is a wide range of tools, such as a magnifier, a narrator, an on‐screen keyboard, and a high‐contrast color scheme. Over the many versions of Windows, the accessibility tools have become expansive. Microsoft has continually added tools and settings to this applet to allow anyone with an impairment to use the operating system to its fullest.
FIGURE 14.57 Ease of Access Center applet
The Administrative Tools applet isn't really an applet at all; it is like a shortcut to various tools, as shown in Figure 14.58. These tools all have a common theme: administering the operating system. Many of these tools can be accessed in other ways, such as right‐clicking the Start button or using the Start menu and expanding Windows Administrative Tools.
In this array of tools, you can configure component services, clean up and defragment the disk, set up iSCSI connections, edit the local security policy, set up Open Database Connectivity (ODBC) connectors, create recovery media, schedule tasks, and perform memory diagnostics. These are just a few tasks that we haven't already covered.
FIGURE 14.58 Administrative Tools applet
The Windows Settings app first made its debut in Windows 8. It was Microsoft's attempt to make configuring Windows simpler for end users. Many of the configuration tasks formerly performed in Control Panel have been either duplicated in the Settings app or replaced entirely. The appearance of the Settings app has created anxiety for end users and administrators alike, because it's a change (albeit unwanted) from the Control Panel that has been around since Windows 95.
You can open the Settings app by clicking the Start menu and selecting the gear on the left‐hand side. The Settings app will open to the screen shown in Figure 14.59. Here, you can search for the setting you need, or you can choose from various categories. The search capability has been a welcome feature, since every release of Windows introduces new settings.
FIGURE 14.59 Windows 10 Settings app
Many of the settings that are covered in this section have also been covered in the Control Panel section, as per the objectives of the CompTIA 220‐1102 exam. It is good to know both ways to access settings, since many of the settings in Control Panel have not been moved over to the Settings app. Likewise, many of the settings in the Settings app can only be found in the app, because they are entirely new settings.
The Time and Language screen in the Settings app allows you to change settings related to the date and time in Windows 10/11, as shown in Figure 14.60. Click Time & Language to change anything that you can change in Control Panel.
The Set Time Automatically switch is on by default and synchronizes the clock with the Network Time Protocol (NTP) server time.windows.com. You
can choose to have Windows adjust the time zone automatically, or you
can manually change the time zone. This dialog box also allows for
customized calendars in the taskbar.
FIGURE 14.60 Date & Time settings
On the left side, under Time & Language, select Region. Changing the Country or Region setting allows Windows to deliver content relevant to the area where you reside. This dialog box also allows you to change the way values such as money, time, and date are formatted. If you select Language from the left side, you can set the Windows display language as well as the preferred language, as shown in Figure 14.61. In addition, you can set the keyboard language and speech language preferences.
The Update and Security category is where you can access all settings related to Windows updates and security, as shown in Figure 14.62. Windows 10 removed the option to control Windows updates from Control Panel and forces you to configure Windows updates in the Settings app.
FIGURE 14.61 Language settings
From this initial screen, you can check for Windows updates, control downloads and installation, view optional updates, pause updates, change your active hours, view update history, and set advanced options. Advanced options let you specify whether to receive updates for other Microsoft products, whether to download updates over metered connections, how soon Windows restarts after updates are applied, how you are notified about updates, and whether to pause updates until a certain date.
If you click Delivery Optimization on the Advanced Options screen for the Windows Update settings, you can change where Windows receives updates. By default, Windows will attempt to download updates from other PCs on the local network in order to conserve bandwidth. If the updates are not available, then Windows will download the updates from the Windows Update service.
FIGURE 14.62 Windows Update settings
The Windows Security option on the left side of the screen allows you to change a number of security‐related items, as shown in Figure 14.63. From this screen, you can view Virus & Threat Protection, Account Protection, Firewall & Network Protection, App & Browser Control, Device Security, Device Performance & Health, and Family Options. The Windows Security screen is a one‐stop shop for everything security‐related for the operating system.
In addition to Windows Update and security‐related configuration, the Update & Security screen allows you to access several other options. The Backup section allows you to back up files to OneDrive, back up using File History, and open the prior Backup and Restore utility. The Backup screen is shown in Figure 14.64.
FIGURE 14.63 Windows Security settings
The Troubleshoot section allows you to specify how Windows will run the troubleshooting recommendations. Also, you can view troubleshooting history and run additional troubleshooters. The Recovery section allows you to reset the operating system back to the original state by using the Reset This PC option. You can also use advanced startup options by clicking Restart Now under Advanced Startup, as shown in Figure 14.65.
FIGURE 14.64 Windows Backup settings
The Activation section allows you to activate Windows 10/11 or change the product key. The Find My Device section lets you track your device if you misplace or lose it. The For Developers section allows you to change how the operating system behaves for development purposes, such as allowing apps to be installed from files (sideloading), enabling device discovery, and adjusting other developer‐friendly settings for File Explorer, as shown in Figure 14.66.
The last section in the Update & Security screen is the Windows Insider Program. Here you can enroll in the Windows Insider Program to get the most advanced set of features that Microsoft is developing for Windows.
FIGURE 14.65 Windows Recovery settings
The classic Display applet has been removed from the Windows 10 Control Panel. You can now change display settings in the Personalization screen, shown in Figure 14.67. You can configure the background, formerly known as the wallpaper.
The Colors section allows you to change the colors for the Windows controls and application controls. Lock Screen allows you to configure how the lock screen looks and what is displayed on the screen when it is locked. By default, the background is set to Windows Spotlight. The Spotlight feature downloads pictures from Bing and displays them on the lock screen. You can also configure which applications will display their status on the lock screen.
As with prior versions of Windows, the theme can be changed using the Themes section. Changing the theme will change the background, colors, sounds, and mouse cursor. You can even download more themes from the Microsoft Store.
The Fonts section allows you to view all the installed fonts on Windows, as well as install new fonts by dragging and dropping them. The Fonts section also contains a link to open the Microsoft Store so you can download additional fonts.
The Start section allows you to personalize the Start menu, as shown in Figure 14.68. You can change a number of settings, such as displaying the app list in the Start menu, showing recently added apps, and showing the suggestions, just to name a few settings.
FIGURE 14.66 Windows For Developers settings
The Taskbar section allows you to change a number of settings. You can lock the taskbar from changes, automatically hide the taskbar, use small taskbar buttons, turn on Peek to preview when the mouse cursor hovers over an application, change the orientation of the taskbar on the screen, and a number of other settings.
FIGURE 14.67 Windows 10 Personalization settings
The Apps section will eventually replace the Programs and Features Control Panel applet, since it performs many of the same functions. This section opens to the Apps & Features subsection, as shown in Figure 14.69. This section will allow you to change the source of apps in relation to the Microsoft Store. You can also uninstall apps by right‐clicking the app and selecting Uninstall. By clicking Optional Features, you can choose to uninstall a feature or add a new one.
FIGURE 14.68 Windows 10 Start settings
FIGURE 14.69 Windows 10 Apps & Features settings
In the Default Apps section, you can select the default app for email, maps, music, photos, videos, and websites. You also have the option to set the default app by the file type of the file or the network protocol being used. Offline Maps allows you to download maps, as well as update the already downloaded maps. Use the Apps For Websites section to associate an app with a website. Doing so causes the app to “spring into action” when a website is visited, making it a seamless experience. The Video Playback section allows you to tweak how a video will look when played back on Windows. Use the Startup section to specify which applications start in the background, as shown in Figure 14.70.
FIGURE 14.70 Windows 10 Startup settings
The Privacy section allows you to control all your privacy concerns with Windows 10/11. The opening section, General, is shown in Figure 14.71. Here you can control your advertising ID, how websites make content decisions for you, the tracking of app launches, and suggested content settings.
FIGURE 14.71 Windows 10 General Privacy Settings
The Speech section allows you to control how your voice is used for speech recognition with Microsoft's online speech recognition technology. Inking & Typing Personalization allows you to control whether your handwriting is used to build a personal dictionary of words. In Diagnostics & Feedback, you control how your personal information is used to provide diagnostic and feedback statistics to Microsoft. The Activity History section lets you control whether your activity is stored and tracked on the device. This is useful for jumping back to recent work: press Windows key + Tab and Windows will show your previous documents and activities. You can also clear the history and turn off tracking entirely.
In addition to the aforementioned Windows permission settings, you can view and control App permissions. The various permissions that can be viewed and controlled are as follows: Location, Camera, Microphone, Voice Activation, Notifications, Account Info, Contacts, Calendar, Phone Calls, Call History, Email, Tasks, Messaging, Radios, Other Devices, Background Apps, App Diagnostics, Automatic File Downloads, Documents, Pictures, Videos, and File System. This is quite an exhaustive list, and you can review and control each of these permissions for a particular application.
The System section allows you to change a multitude of settings that pertain to the operating system, as shown in Figure 14.72. In the Display section, you can arrange your monitors, if you have more than one. You can also change how the additional monitors operate, such as extending or duplicating your desktop. You can also turn on the feature called Night Light that restricts the blue light the display normally emits. The Display section also allows you to tune the Windows high dynamic range (HDR) of colors on your display. The most important settings are probably the display resolution and Scale And Layout settings, which allow you to get the most out of your display.
FIGURE 14.72 Windows 10 System settings
In the Sound section, you select your output and input devices, as shown in Figure 14.73. You can also click the Troubleshoot button to help you identify sound issues. The Sound section is similar to the Sounds Control Panel applet, because it allows you to change the sound devices and control volume levels.
FIGURE 14.73 Windows 10 Sound settings
The Notifications & Actions section allows you to change the way operating system notifications behave. You can control all operating system notifications, change lock screen notifications, control reminders and incoming VoIP calls on the lock screen, and specify whether notifications play sounds, among other settings. The Focus Assist section allows you to control which notifications come to your attention and when they notify you; you can, for example, choose to suppress notifications while you are playing a video game. The Power & Sleep section is identical to the Power Options Control Panel applet. Here you can change when the screen turns off and when the operating system enters sleep mode.

The Storage section provides a graphical overview of space used on the local disk, as shown in Figure 14.74. Clicking each category of storage brings up a different view of the storage. For example, Apps & Features displays all the applications you can uninstall on the operating system, and Temporary Files displays all the various temporary files on the operating system (you can then choose to remove them). A feature called Storage Sense can be turned on, which automatically frees up space on the local disk by removing unneeded files.
FIGURE 14.74 Windows 10 Storage settings
The Tablet section lets you control how the device performs when you remove the keyboard and convert it to a tablet. Use the Multitasking section to control Snap Assist, which is how an application or window snaps into a corner of the screen. You can also change the way the Alt + Tab keys display applications. In addition, you can configure how virtual desktops are used in Windows 10/11.

The Projecting To This PC section allows you to control how other devices project their displays to Windows 10/11. The protocol used is called Miracast, which is a technology that allows screen sharing across devices. The Shared Experiences section allows you to control how apps are shared across multiple devices. You can start a task on one device and finish it on another device, if you are logged into both devices and have the feature turned on.

The Clipboard section allows you to control how the clipboard operates. You can turn on features like Clipboard History, which enables you to have multiple items in your clipboard. You can even sync clipboards across multiple devices. Use the Remote Desktop section to enable and disable the Remote Desktop feature, which allows you to connect remotely to the PC. The last section, About, allows you to view information about the PC and rename it, if you want.
The Devices section allows you to view, control, and configure all devices connected to the PC. This section will eventually replace the Devices and Printers Control Panel applet. The opening screen of Bluetooth & Other Devices allows you to view and configure devices that are directly connected to the system, connected via Bluetooth, or connected via another wireless technology, as shown in Figure 14.75. On this screen, you can add a Bluetooth device with a pairing process.
FIGURE 14.75 Windows 10 Devices settings
The Printers & Scanners section allows you to view and configure all of the installed printers and imaging devices on the operating system. You can allow Windows to manage the default printer, in which case Windows will select the most recently used printer as your default. You also have the option to add additional printers or scanners. The Mouse section lets you change how the mouse behaves on the operating system. You can change settings like which button on the mouse is your primary button, your cursor speed, how the scroll wheel advances, and other mouse‐related settings. Use the Typing section to control whether spell check is enabled and whether suggestions are turned on as you type. The Pen & Windows Ink section contains settings related to handwriting, such as the font to use when converting handwriting.

The AutoPlay section allows you to control how AutoPlay works for media and devices connected to Windows. You can choose a default action for removable drives or memory cards when they are inserted. The USB section allows you to control whether you are notified if a USB device is not working correctly.
Use the Network & Internet section to view and control network settings for Windows, as well as Internet settings. The opening screen is the Status screen, and it displays the current status of the network connection, as shown in Figure 14.76. On this screen you can click Properties and change various properties of the network connection, such as the network profile (public or private) used by the firewall, the metered connection setting, and IP addressing. A newer feature lets you view the data usage for each application, which makes it easy to find an application that uses a lot of bandwidth.
FIGURE 14.76 Windows 10 Network & Internet settings
In addition to viewing and changing basic properties for the network connection, you can open the traditional view of network adapters, access the Network and Sharing Center, and open the Network Troubleshooter. The Ethernet section allows you to open the traditional view of network adapters as well. Use this section to configure advanced sharing options, such as network discovery and file and printer sharing. This section also provides shortcuts to open the Network and Sharing Center and the Windows Firewall.
Although it's unlikely you have a dial‐up connection, the Network & Internet section includes a screen for configuring dial‐up connections. This section, just like the Ethernet section, provides a way to open the traditional view of network adapters, access the Network and Sharing Center, and turn on Windows Firewall.
Use the VPN (virtual private network) section to view and configure settings for VPN connections, as shown in Figure 14.77. You can add a VPN connection or change advanced options, such as allowing VPN over metered networks or allowing VPN connections while roaming if a cellular modem is being used. The same shortcuts to adapter settings, Advanced Sharing Options, Network and Sharing Center, and the Windows Firewall are also available.
FIGURE 14.77 Windows 10 VPN settings
The last section is Proxy, where you can configure a proxy for the currently logged‐on user, as shown in Figure 14.78. Here you set the proxy that Internet Explorer 11 and Microsoft Edge will use. Other applications that use the common Microsoft web controls will also use the proxy server. By default, Automatically Detect Settings is enabled, but you can elect to use a setup script instead. It is a common task to manually set a proxy server, which will be in the form of an IP address or fully qualified domain name (FQDN), along with the port and a bypass list of addresses.
FIGURE 14.78 Windows 10 Proxy settings
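As a related command‐line aside (this configures the machine‐wide WinHTTP proxy used by some system services, which is separate from the per‐user setting shown on this screen), a proxy can also be viewed and set from an elevated command prompt. The server name and bypass list below are placeholder values for illustration, not values from the text:

rem Display the current WinHTTP proxy configuration
netsh winhttp show proxy
rem Set a proxy server and a bypass list (placeholder values)
netsh winhttp set proxy proxy-server="proxy.example.com:8080" bypass-list="*.example.local"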
The Gaming section was originally introduced with Windows 8 to create a seamless interface between the Xbox platform and the PC. Today it has evolved into a rich set of features, and here you control how gaming is handled on Windows 10/11.
The Xbox Game Bar section is the opening screen, shown in Figure 14.79. In this section, you can control whether the game bar is active during a game and how it launches. You can change any of the shortcut keys related to the Xbox Game Bar.
FIGURE 14.79 Windows 10 Gaming settings
The most important shortcut is Windows key + G, which launches the Xbox Game Bar, as shown in Figure 14.80. If you have an Xbox controller, the Xbox button will also launch the game bar.
The Captures section allows you to configure where screenshots and recorded captures are saved. You can manage all aspects of the capture in this section, such as recording length, recording audio, audio quality, microphone and system volume levels, recorded frames per second, and overall video quality. Use the Game Mode section to control the game mode, which turns off Windows updates so they don't interrupt gameplay. You can also adjust the quality of gameplay to deliver the best frame rate, and manually change the Graphics settings for performance of either desktop apps or Microsoft Store apps.
The last section in the Gaming settings is the Xbox Networking section, which helps an Xbox Live player diagnose problems with gameplay and networking. This section automatically checks Internet connectivity, Xbox Live services, your latency to these services, and packet loss. It displays the latency, packet loss, the type of NAT your router is using, and server connectivity. The type of NAT and the server connectivity affect others wishing to connect to your computer for multiplayer games.
FIGURE 14.80 Windows 10 Xbox Game Bar
The Accounts section allows you to view and configure all the settings for your user account, as well as other accounts on the operating system. The default screen is the Your Info screen, and it will display all of the information about your account, such as name, email address, and account type, as shown in Figure 14.81. You also have the option of managing your Microsoft account online.
The Email & Accounts section enables you to add an email account that is used for email, calendar, and contact information. You can also change accounts used by other apps, such as the Microsoft Store app, that require a login. This section also allows you to change the default apps associated with files and actions, such as viewing a movie, listening to music, or browsing the web.
Use Sign‐in Options to change the way you log into Windows. The Windows Hello feature is configured on this screen, as shown in Figure 14.82. Windows Hello allows you to substitute your face, fingerprint, PIN, security key, or picture password for your actual password. The Hello feature works by storing your real credentials, such as your username and password, in Credential Manager, which is then locked behind the chosen sign‐in method. When you attempt to sign in with your face, for example, Windows Hello unlocks the credentials stored in Credential Manager and passes the actual username and password to the operating system. Dynamic Lock is another feature that can be configured in this section. Dynamic Lock will automatically lock your computer when you walk away with a device that is paired to the laptop, such as a mobile device.
FIGURE 14.81 Windows 10 Accounts, Your Info
FIGURE 14.82 Windows 10 Hello
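Although the chapter does not cover it, the credentials that Credential Manager stores can also be listed from a command prompt with the built‐in cmdkey utility, which can be handy when troubleshooting sign‐in problems:

rem List the credentials currently stored in Credential Manager for this user
cmdkey /list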
The Access Work Or School section is used to connect the operating system with a corporate or school account. These accounts usually carry mobile device management (MDM) settings, which pass some or all control of the operating system to the organization responsible for the account. The enrollment of the operating system into an MDM system can be performed with a provisioning package to help simplify the enrollment process. You can also export management log files for analysis if something is not functioning properly with the MDM control. Finally, you can set up an account for test taking, which locks down the operating system whenever that account logs in.
The Family & Other Users section allows you to add family member accounts. You can then limit screen time, apps, appropriate websites, and games. In addition, you can add other users who can log into the operating system but are not controlled via your family group. Windows 10/11 can also be set up as a kiosk. You launch a wizard that creates a local account, and you can then choose a kiosk app. When kiosk mode is enabled, the operating system will boot up, automatically log in as the local account that was created, and run the configured app.
The last section, Sync Your Settings, allows you to choose what is synced from one Windows system to another Windows system. You can sync your theme, passwords, language preferences, and other Windows settings.
Windows configuration information is stored in a special configuration database known as the Registry. This centralized database contains environmental settings for various Windows programs. It also contains registration information that details which types of filename extensions are associated with which applications. So, when you double‐click a file in Windows File Explorer, the associated application runs and opens the file that you double‐clicked.
The Registry was introduced with Windows 95. Most operating systems up until Windows 95 were configured through text files, which could be edited with almost any text editor. However, the Registry database is contained in a special binary file that can be edited only with the Registry Editor provided with Windows.
The Registry is broken down into a series of separate areas called hives. The keys in each hive are divided into two basic sections: user settings and computer settings. In Windows, a number of files are created corresponding to each of the different hives. The names of most of these files do not have extensions, and their names are SYSTEM, SOFTWARE, SECURITY, SAM, and DEFAULT. One additional file, whose name does have an extension, is NTUSER.DAT.
The basic hives of the Registry are as follows:

HKEY_CLASSES_ROOT Includes information about which filename extensions map to particular applications.

HKEY_CURRENT_USER Holds all configuration information specific to a particular user, such as their desktop settings and history information.

HKEY_LOCAL_MACHINE Includes nearly all configuration information about the actual computer hardware and software.

HKEY_USERS Includes information about all users who have logged into the system. The HKEY_CURRENT_USER hive is actually a subkey of this hive.

HKEY_CURRENT_CONFIG Provides quick access to a number of commonly needed keys that are otherwise buried deep in the HKEY_LOCAL_MACHINE structure.

If you need to modify the Registry, you can modify the values in the database or create new entries or keys. You will find the options for adding a new element to the Registry on the Edit menu. To edit an existing value, double‐click the entry and modify it as needed. You need administrator‐level access to modify the Registry.
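In addition to the Registry Editor, values can be read and written from an administrative command prompt with the built‐in reg utility. The vendor and application key in the second command is a made‐up example for illustration, not a key referenced in the text:

rem Read a single value from HKEY_LOCAL_MACHINE
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v ProductName
rem Create (or overwrite) a string value under a hypothetical application key
reg add "HKCU\Software\ExampleVendor\ExampleApp" /v InstallPath /t REG_SZ /d "C:\ExampleApp" /f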
Windows stores Registry information in several files on the hard drive. In Windows 7 and earlier, you could restore this information using the Last Known Good Configuration option on the F8 start menu. With later versions, you would have to restore the files from a backup of the systemroot\repair directory by using the Windows Backup program. Repairing the Registry from a backup overwrote the Registry files in systemroot\system32\config.
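Although not mentioned in the text, a quick manual backup of a Registry key can also be taken with the reg utility before you edit it; the key and file paths below are illustrative placeholders only:

rem Export a key (and its subkeys) to a .reg file that can be merged back later
reg export "HKCU\Software\ExampleVendor\ExampleApp" C:\Backups\exampleapp.reg /y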
Remember that a system restore will restore the Registry to the state it was in when a restore point was saved. As a very last‐resort option for system recovery, Windows uses the Windows Recovery Environment (WinRE) to do a complete PC reset. It is your goal to make sure that you never need to use this.
Where there are files, there are disks. That is, all the files and programs that we've talked about so far reside on disks. Disks are physical storage devices, and they also need to be managed. There are several aspects to disk management. One is concerned with getting disks ready to be able to store files and programs; another deals with backing up your data; and yet another involves checking the health of disks and optimizing their performance. We'll look at these aspects in more detail.
In order for a hard disk to be able to hold files and programs, it has to be partitioned and formatted. Partitioning is the process of creating logical divisions on a hard drive. A hard drive can have one or more partitions. Formatting is the process of creating and configuring a file allocation table (FAT) and creating the root directory. The New Technology Filesystem (NTFS) is available with all the versions of Windows you need to know about for the exam, but others are also recognized and supported. The file table for the NTFS is called the Master File Table (MFT).
The following is a list of the major filesystems that are, or have been, used and the differences among them:

Swap Partition Swap partitions are similar to the page file (pagefile.sys) in Windows, except that they are their own partition type. They are used for virtual memory when the physical memory is exhausted.

When you're installing Windows 10/11, the installer defaults to NTFS. However, you can use the partitioning tool to format the partition with FAT32. It's really unnecessary, since NTFS is a much better filesystem with respect to stability, avoiding corruption, and security. When you are formatting a storage drive, you can format it with FAT or NTFS. Storage devices formatted in FAT can be read by other operating systems, but when they are formatted in NTFS, the operating system must explicitly support NTFS.
To format a partition from the command line, use the format command, which is available with all versions of Windows. You can run format from a command prompt or by right‐clicking a drive in Windows File Explorer and selecting Format. However, when you install Windows, it performs the process of partitioning and formatting for you if a partitioned and formatted drive does not already exist.
You can usually choose between a quick format and a full format: a quick format simply creates a new, empty file table on the volume, whereas a full format also scans the volume for bad sectors, which takes considerably longer.
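For example, assuming a secondary volume has been assigned the letter E: (a placeholder drive letter), the following performs a quick NTFS format; omit /Q for a full format that also checks for bad sectors. Either form erases the data on the volume:

rem Quick-format volume E: with NTFS and the volume label DATA
format E: /FS:NTFS /V:DATA /Q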
In Windows, you can manage hard drives through the Disk Management console. To access Disk Management in Windows 10, right‐click the Start menu and then click Disk Management. Alternatively, you can start the Disk Management MMC by typing diskmgmt.msc in the Run dialog box.
The Disk Management screen lets you view a lot of information about all the drives installed in your system, including CD/DVD‐ROM drives (see Figure 14.83).
There are many instances in which an administrator will turn to Disk Management while trying to find the right storage solution. The Disk Management administrative console allows you to review all the logical partitions (volumes) configured and the physical drives (disks) that are connected to the computer.
The logical volume is displayed in the upper portion of the Disk Management console. This view allows you to see at a glance all the configured volumes on the operating system. Details about the volume letter or type, the layout type, the type of disk (basic vs. dynamic), filesystem, status, capacity, and free space can be seen in this logical view.
The physical layout, displayed below the logical view, allows you to see the layout of the physical disks in a graphical format. Using both views, you can see the layout of a physical disk and its corresponding logical layout. The physical layout allows you to view the status of the drive, the size of the drive, and its health, in addition to the layout of volumes on the physical disk.
FIGURE 14.83 Disk Management MMC in Windows 10
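A rough command‐line counterpart to these two views, offered here as an aside, is the diskpart utility run from an elevated command prompt: list disk approximates the physical view, and list volume approximates the logical view.

diskpart
list disk
list volume
exit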
Windows supports two partition styles, also known as partition schemes: MBR and GPT. The terms are used to describe the underlying structure of the partitioning of the physical disk. In addition, a disk can be configured as either a basic disk or a dynamic disk.
Right‐clicking any volume opens a context menu that allows you to change the drive letter or paths, format, extend, shrink, delete, or add a mirror. Right‐clicking any drive opens a context menu that allows you to create a new spanned, striped, mirrored, or RAID‐5 volume. You can also convert to a dynamic or GPT disk.
Let's discuss the features and functions in further detail:
Storage Spaces This Windows feature was initially introduced in Windows Server 2012 and Windows 8. It allows for a group of drives to be placed into a pool of storage that can be configured for fault tolerance. The benefit is that more disks can be added later to extend a storage pool dynamically. The one caveat is that the hard drives can either be managed by the Disk Management MMC or Storage Spaces, but not both.
The Storage Spaces feature lets you manage a variety of hard drives. A unique feature is its capability to create a storage pool using two or more external (USB‐attached) hard drives. Storage Spaces is not exclusive to external hard drives; internal hard drives can be used as well.
In Storage Spaces, you can create a two‐way mirror, a three‐way mirror, or parity resiliency of your data. A two‐way mirror is identical in functionality to a RAID‐1 mirror. A three‐way mirror is similar to a RAID‐1 mirror in that it duplicates the data on two other drives; unlike RAID‐1, however, you can lose two hard drives and retain data. A parity resiliency type is identical in functionality to a RAID‐5 (striping with parity). You can also use Storage Spaces without resiliency, which is called a simple volume, as shown in Figure 14.84.
FIGURE 14.84 Windows 10 Storage Spaces
As time goes on, it's important to check the health of Windows computers’ hard disks and to optimize their performance. Windows provides you with several tools to do so, some of which we've already mentioned in this chapter. One important tool is Disk Defragmenter, which has existed in almost all versions of Windows.
When files are written to a hard drive, they're not always written contiguously, or with all the data in a single location. Files are stored on the disk in numbered blocks, similar to PO boxes. When they are written, they are written to free blocks. As a result, file data is spread out over the disk, and the time it takes to retrieve files from the disk increases. Defragmenting a disk involves analyzing the disk and then consolidating fragmented files and folders so that they occupy a contiguous space (consecutive blocks). This increases performance during file retrieval, since the read/write heads in a mechanical hard disk need to travel less to retrieve the blocks. Defragmentation of the filesystem is not required on solid‐state drives (SSDs), since fragmented data blocks do not slow down access to flash memory locations.
To access Disk Defragmenter, follow these steps:
If defragmentation is recommended, click Defragment.
Be aware that for large disks with a lot of fragmented files, this process can take quite some time to finish.
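The same analysis and optimization can also be performed from an elevated command prompt with the defrag utility; drive C: is used here as the example volume:

rem Analyze the volume and report the fragmentation percentage
defrag C: /A
rem Optimize the volume (defragments a hard disk; retrims an SSD)
defrag C: /O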
In this chapter, you learned about some of the tools that can be used with Windows. We covered basic Windows management concepts, including managing disks, using filesystems, and understanding directory structure. Keeping your computer healthy will save you a lot of stress.
With the basic knowledge gained in this chapter, you are now ready to learn how to interact with the most popular operating systems in use today. These topics are covered in the next four chapters.
The answers to the chapter review questions can be found in Appendix A.
taskmgr
kill
shutdown
netstat
eventviewer.exe
eventvwr.msc
lusrmgr.msc
devmgmt.msc
.des extension and you want to be able to search each file's metadata; which applet should this be configured in?
HKEY_CURRENT_MACHINE
HKEY_LOCAL_MACHINE
HKEY_MACHINE
HKEY_RESOURCES
You will encounter performance‐based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors’, refer to Appendix B.
You are working at a company that has standardized on Windows 10 workstations for all. The phone rings, and it is your supervisor. He tells you that his workstation is running incredibly slowly, almost to the point where it is unusable. When you ask what he is running, he reports that he has exited out of everything but the operating system. You suspect there are background processes tying up the CPU and memory. Which utility can you have him use to look for such culprits?
FIGURE 14.85 Windows Task Manager
THE FOLLOWING COMPTIA A+ 220‐1102 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
The previous chapter introduced the basic components of Windows operating systems and discussed the various tools used to configure Windows. This chapter builds on the previous chapters and focuses on Windows administration. In this chapter you will learn how to install and upgrade Windows 10/11 and how to use various command‐line tools. We'll also explore advanced concepts, such as Microsoft networking and design. All the content is generic to the Windows operating systems you'll be tested on during the 220‐1102 exam, and it can be applied to any of the previous Windows operating systems.
Windows 10, released on July 29, 2015, was the successor to Windows 8.1. Actually, for many people who never upgraded from Windows 7, it was their direct upgrade to Windows 10. The user interface (UI) looked much the same, and it was a much‐needed upgrade. Microsoft also made the upgrade notification so irritating that many were forced to upgrade just to escape the notifications.
Regardless of the motivation to upgrade to Windows 10, its adoption was successful because of its meager system requirements, which are identical to those of prior versions of Windows. Starting with Windows 11, the requirements have been scaled up for performance, as shown in Table 15.1. Those modest hardware requirements were one of the driving factors for many to just hit the upgrade button.
Component | Windows 10 Requirement | Windows 11 Requirement |
---|---|---|
Processor | 1 GHz or faster | 1 GHz or faster (2 or more cores) 64‐bit |
RAM | 1 GB (32‐bit) or 2 GB (64‐bit) | 4 GB (64‐bit) |
Hard drive space | 16 GB (32‐bit) or 32 GB (64‐bit) | 64 GB (64‐bit) |
Graphics card | DirectX 9 with WDDM 1.0 or higher driver | DirectX 12 with WDDM 2.0 driver |
TABLE 15.1 Windows 10/11 system requirements
If you are planning to install Windows 10 on a computer, it must meet the specifications in Table 15.1, which are pretty easy to meet or exceed. However, sometimes you'll find that your organization's computers are older than you think. Windows 11 requires a realistic amount of computing power, RAM, and storage. In addition to these requirements, UEFI firmware that is Secure Boot capable and a TPM 2.0 module are required by Windows 11. To learn more about Windows 10 and Windows 11 system requirements, visit https://docs.microsoft.com/en-us/windows-hardware/design/minimum/minimum-hardware-requirements-overview.
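If you are unsure whether a computer meets the Windows 11 firmware requirements, two quick checks (not covered in the text) are the TPM Management console, which reports the TPM specification version, and the System Information utility, whose System Summary page lists the BIOS Mode and Secure Boot State:

rem Open the TPM Management console to check for TPM 2.0
tpm.msc
rem System Summary shows BIOS Mode (UEFI vs. Legacy) and Secure Boot State
msinfo32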
Sometimes you need to deploy Windows 10/11 to a group of computers. There are several tools that can be used to collect information about the hardware of the current operating system. The easiest tool to use is the System Information utility. To access this utility, you simply log in and select System Information from the Start menu. You can use System Information to investigate the processor, RAM, hard drive, and video card, as well as specifics about each of the peripherals connected (see Figure 15.1). You can also access System Information by pressing Windows key + R, typing msinfo32.exe, and clicking OK.
You can also use System Information remotely. Select View, choose Remote Computer from the drop‐down menu, and select the remote computer. You can also access this utility from the command line by typing msinfo32.exe. The msinfo32.exe command allows remote collection of information to a flat text file using the following command:

msinfo32.exe /computer computername /report c:\report.txt

The msinfo32.exe command can also be used to output the information to its native format using the following command:

msinfo32.exe /computer computername /nfo c:\report.nfo
FIGURE 15.1 System Information
However, this NFO file will need to be opened with the System Information utility, which might be difficult if there are more than 10 or so machines.
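One workaround, sketched here rather than taken from the text, is to script the collection with a for loop at the command prompt; computers.txt is an assumed file containing one computer name per line, and c:\reports is an assumed existing folder:

for /f %i in (computers.txt) do msinfo32.exe /computer %i /nfo c:\reports\%i.nfo

If you place the command in a batch file, double the percent signs (%%i).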
An alternative to the System Information (msinfo32.exe) utility is the Microsoft Assessment and Planning Toolkit (MAP), which can be downloaded from the Microsoft Download Center (www.microsoft.com/downloads). The MAP Toolkit allows for the automated inventory collection of hardware and software from the current operating systems. The MAP Toolkit can then produce a report from the collection of data and provide the administrator with the readiness of current hardware for Windows 10/11. The MAP Toolkit requires the installation of a SQL database, which is included in the installation of the MAP Toolkit. The MAP Toolkit requires a dual‐core 1.5 GHz processor, 2 GB of RAM, and a minimum of 1 GB of free hard drive space.
Windows 10/11 can be installed as an upgrade or as a clean installation. When you choose Custom, you can decide whether or not to format the hard disk. If you choose not to format the hard disk, the old operating system is placed in a folder called WINDOWS.OLD. When you choose to format the hard disk, it will erase your files, programs, and settings.
When installing Windows 10, you have the option to install it on Basic Input/Output System (BIOS)‐based hardware or Unified Extensible Firmware Interface (UEFI)‐based hardware. When installing Windows 11, you must install it on UEFI‐based hardware. The hardware must support the newer standard of UEFI to install it in this fashion. UEFI hardware provides a feature called Secure Boot. Secure Boot operates by checking the signatures of the hardware, including the UEFI drivers (also called option ROMs), EFI applications, and, finally, the operating system. If the signatures are verified, the operating system is then given control of the boot process and the hardware. Windows 11 also requires that the hardware is Secure Boot–capable and a TPM 2.0 module is installed.
For a UEFI installation, the partitioning of the drive will be laid out like Figure 15.2. The Recovery partition holds a bootable copy of the Windows Recovery Environment (WinRE) and is roughly 500 MB in size. The EFI System Partition (ESP) is a System partition used to hold the Boot Configuration Data (BCD) for the booting of the boot partition containing the Windows kernel.
FIGURE 15.2 Windows default disk layout
The installation of Windows 11 is almost identical to the installation of Windows 10, Windows 8/8.1, and Windows 7. For that matter, it is similar to most operating systems, such as macOS or Linux. There are several common elements during the setup process that must be addressed, such as locale and where to install the operating system.
The installation of Windows 11 can be performed from a Windows 11 installation DVD‐ROM. However, optical media such as DVD‐ROM is rarely used because most laptops and tower computers no longer include optical drives. Universal Serial Bus (USB) installation media is the preferred method with most computer vendors. The installation media is created with the Windows 11 Media Creation Tool, which can be downloaded from:
www.microsoft.com/software-download/windows11
If you are performing an upgrade from within another operating system and the installation does not begin immediately, look for the setup.exe file and run it. The following example shows the setup of Windows 11 with the latest installer (21H2) and a clean installation:
In the Windows Setup dialog box, shown in Figure 15.3, select the language in which the installation will continue, as well as the keyboard format or input method; these settings are often referred to as the locale.
Once the locale is set, a dialog box will present you with three options to proceed, as shown in Figure 15.4:
FIGURE 15.3 Windows Setup dialog box
FIGURE 15.4 Windows setup options
Click Install Now.
You will be asked which version of the operating system you want to install, as shown in Figure 15.5.
FIGURE 15.5 Windows edition selection
Select only the edition for which you have a valid license key for activation, since you cannot change editions later without a complete reinstallation of the operating system.
After selecting the edition, you will be prompted with the end‐user license agreement (EULA), as shown in Figure 15.6.
FIGURE 15.6 Windows end‐user license agreement
To continue, check the “I accept the Microsoft Software License Terms” check box, and then click Next.
The next screen asks you which type of installation you want. There are two options, as shown in Figure 15.7.
If you choose the upgrade option, the prior operating system is placed in the C:\WINDOWS.OLD folder. You have 30 days to roll back to the prior operating system if you do not want to keep Windows 11.
FIGURE 15.7 Windows installation options
Choose Custom: Install Windows Only (Advanced). This is also known as a clean installation, because it will format the installation drive.
The next screen asks where you want to install the new operating system. You can delete, create, and extend partitions. However, most of the time if a partition exists and you are not upgrading, then deleting the existing partitions is the most common task. In rare instances where no drives show up, you may need to install custom drivers specified by the vendor. You can perform that task from this screen as well by selecting Load Driver, as shown in Figure 15.8.
FIGURE 15.8 Windows installation partitioning
Select the drive for the installation of Windows 11, and then click Next.
Once you have gone past this point, there is no going back. The Windows installation will begin, as shown in Figure 15.9.
This step is where the filesystem is formatted, the boot files are copied, and the operating system's files are applied to the disk. After this stage completes, the files are on the hard drive but the operating system is not generalized to your computer; that happens in the next boot. After the computer reboots, it will detect hardware and run through what is called a generalize pass. The screen showing that device drivers are being detected (along with a percentage) will flash by, and then you'll see a Getting Ready screen, as shown in Figure 15.10. This is the point where the operating system is adjusting itself to the computer hardware.
FIGURE 15.9 Windows installation progress
FIGURE 15.10 Windows Getting Ready screen
Once the drivers are detected, the operating system will reboot again. During this boot, the drivers detected in the prior stage will be instantiated and the specialize pass will begin. In this pass, the locale (region) of the operating system is chosen, as shown in Figure 15.11, as well as the keyboard layout. You can even choose a second keyboard layout if you use more than one keyboard.
FIGURE 15.11 Operating system locale (region) setting
Confirm the keyboard layout by clicking the Yes button, as shown in Figure 15.12. It will also ask you if you want to add a second keyboard layout; just click Skip.
The Windows installation will check for updates after the selection of the locale, keyboard layout, and the additional keyboard layout question. The operating system will then do some background work, displaying changing status messages to assure you that something is happening.
FIGURE 15.12 Operating system keyboard layout setting
Windows 11 allows you to name your device during the setup, as shown in Figure 15.13. This is different from older operating systems, such as Windows 10 and Windows 8/8.1, where a random name was created for you.
The operating system will reboot after you confirm the name of the system. You will then see a Just a Moment screen while the operating system boots, as shown in Figure 15.14.
The next set of screens will set up the first user account, which is also the first administrator of the operating system. This is considered the OOBE pass, or out‐of‐box experience pass, where the first account is set up for the operating system. You have the choice of logging in with a Microsoft account by choosing Set Up For Personal Use. This is great if you have the account combined with a Microsoft 365 subscription, since the licensing of the Office products is streamlined when you perform a login of this nature. However, if you want to join the computer to an organization's Intune mobile device management (MDM) service, you can select Set Up For Work Or School, as shown in Figure 15.15.
FIGURE 15.13 Operating system name
FIGURE 15.14 Just A Moment screen
FIGURE 15.15 Windows account options
If you click Set Up For Personal Use, you will be asked to sign in with a Microsoft account so that your apps, files, and services will sync to the device, as shown in Figure 15.16. Alternatively, you can click Sign‐in Options, and you will be presented with the options to sign in with a security key, create an offline account, or retrieve your username, as shown in Figure 15.17. Creating an offline account is similar to creating the local accounts used on previous operating system versions.
If you choose Set Up For Work Or School, you'll be asked to sign in with a work or school account that is attached to an Intune service. This gives control of the device over to the organization, where it will be governed by the MDM policies in Intune. Alternatively, you can click Sign‐in Options, as shown in Figure 15.18. If you select Sign‐in Options, you will be presented with the options to Sign In With A Security Key or Domain Join Instead, as shown in Figure 15.19.
FIGURE 15.16 Microsoft account for personal use
FIGURE 15.17 Microsoft account options for personal use
FIGURE 15.18 Microsoft account for work or school
FIGURE 15.19 Microsoft account options for work or school
After successfully logging into your Microsoft account, you will be asked to create a PIN to log into the computer with, as shown in Figure 15.20.
FIGURE 15.20 Create a PIN screen
The PIN will replace your password for only this installation. By setting a PIN, you protect your Microsoft account password. If someone shoulder surfs your PIN, they won't have your Microsoft account password—they would only have access to the local computer if it is left unattended.
Type the PIN for the account, then enter it again and click OK, as shown in Figure 15.21.
FIGURE 15.21 Set up a PIN screen
FIGURE 15.22 Restore from prior device or Set up as new device
FIGURE 15.23 Windows telemetry options
FIGURE 15.24 Windows experience customization
FIGURE 15.25 OneDrive confirmation screen
FIGURE 15.26 This Might Take a Few Minutes screen
At this point, you are pretty much done. The next screen you will see is the Windows 11 Desktop. However, throughout the entire setup, you were never asked for your time zone. The time zone is automatically calculated based on your IP address. If it is incorrect, then the following procedure will manually adjust it.
To manually adjust the time, perform the following steps:
Right‐click the clock on the taskbar and select Adjust Date And Time.
This will open the Date & Time screen, as shown in Figure 15.27.
FIGURE 15.27 Date & Time
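The time zone can also be checked or changed from a command prompt with the built‐in tzutil utility; the Eastern Standard Time ID below is just an example value:

rem Display the current time zone ID
tzutil /g
rem List the valid time zone IDs, then set one
tzutil /l
tzutil /s "Eastern Standard Time"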
Now let's take a look at the in‐place upgrade process for a Windows 10 operating system to Windows 11. It is recommended that you start by performing all of the current Windows Updates first. Then you can start the upgrade process itself by inserting the Windows 11 media or connecting to a network share that contains the media and launching setup.exe. This will require answering a User Account Control (UAC) prompt. When the setup process starts, it will give you the option to change how Windows Setup downloads updates, as shown in Figure 15.28, or you can just click Next at this point. The default is to proceed with the download of Windows Updates for the installation.
FIGURE 15.28 Install Windows 11 screen
Before proceeding, you must accept the end‐user license agreement (EULA), also known as the license terms, as shown in Figure 15.29.
FIGURE 15.29 Windows 11 end‐user license agreement
The installer will check and download updates necessary for the installation before continuing, as shown in Figure 15.30. This will ensure that you are secure and the upgrade process is smooth and without complication.
FIGURE 15.30 Windows 11 update check
The next screen you will then see is the Ready To Install screen shown in Figure 15.31. This confirms the edition detected and also confirms that personal files and apps will be kept. You have the option to change what is kept during the upgrade process by clicking Change What To Keep.
The upgrade process will begin, and you will see the familiar progress percentage in the upper‐left corner of the screen, as shown in Figure 15.32. The computer will reboot, and Setup will continue with a different progress screen, as shown in Figure 15.33. The computer will reboot several times during this process. As mentioned earlier, device drivers will be detected, and a reboot is required for the drivers to be properly loaded and instantiated.
FIGURE 15.31 Windows 11 Ready To Install screen
FIGURE 15.32 Windows 11 upgrade percentage
FIGURE 15.33 Windows 11 upgrade percentage after reboot
A repair installation is used when you want to reinstall the operating system without losing personal data files, application settings, or applications you've installed. The installation is similar to an upgrade, as described in the previous section, except that Windows 10/11 will detect that it is installed already. You will be presented with the option Keep Personal Files And Apps. The setup process will then reinstall the OS without affecting your personal files, applications, and their corresponding settings. It will, however, reinstall the operating system files, so it is considered a repair installation.
Another option for reinstalling Windows 10/11 is to reset the operating system with the Reset This PC option. This option is used to reset the operating system back to its original state. It provides another way to fix the operating system when it appears to be corrupted. This method should be used as a last resort. It does allow you the choice to keep personal files or completely erase them along with the operating system. On Windows 10, you can reset the operating system by clicking Start, clicking the Settings gear, clicking Update & Security, clicking Recovery, and then choosing the Get Started option under Reset This PC. On Windows 11, you can reset the operating system by clicking Start, clicking the Settings gear, clicking System, clicking Recovery, and then choosing the Reset PC option under Reset This PC.
Some vendors will supply a recovery partition that contains the original image for the OS the system came installed with. In the event the operating system is corrupted, the recovery partition can be used to reimage the device back to the factory image. When vendors started to include the recovery partition, the image could be restored by booting the computer into the recovery partition, and a third‐party utility would be used to reimage the device. However, with Windows 10/11 the use of third‐party utilities is no longer needed—Windows has a provision for locally installing the image via WinRE.
Side‐by‐side upgrades require additional hardware, since the original operating system will not be modified during the upgrade. The new hardware will be the hardware that is installed from scratch with Windows 10/11. The user settings and data will then be migrated over from the previous operating system to the new one.
Side‐by‐side upgrades are the best way to upgrade when you need to upgrade a system but the current system is in use. When dealing with one system, side‐by‐side upgrades are great because you don't have to worry about backups of the OS. However, applications must be reinstalled, so this approach does have its disadvantages. When multiple systems require upgrading, a rolling upgrade can be performed. A rolling upgrade is a variation on the side‐by‐side upgrade that allows the decommissioned equipment to be used to upgrade the next user, and this creates a cycle. In other words, as you complete the upgrade of one user, their old system becomes the next user's new system, and the process continues until you get to the last user.
There are a few ways of migrating the user data from the old device to the new device, depending on how your deployment is configured. If the devices are using OneDrive to back up users’ data folders, then users can simply sync their data onto their new devices. This method works for data files and Microsoft Store apps, but OneDrive will not back up and restore Win32 applications.
If you are not using OneDrive, then you can migrate user data with Windows 10/11 using the Microsoft Windows User State Migration Tool (USMT). The USMT allows you to migrate user file settings related to the applications, Desktop configuration, and accounts from one computer to another computer. The migration can be performed via a network connection or a hard drive. USMT is compatible with Windows 10 and Windows 11, but it is currently available only as part of the Windows 11 Assessment and Deployment Kit (ADK). Although the USMT is not part of the CompTIA objectives for the 220‐1102 exam, it is important to know that a tool like the USMT exists, because the migration of user information is a component of an upgrade.
Using the USMT requires an investment of time. If you are performing only a few side‐by‐side upgrades, you can copy the user's profile directory manually. Then, when the new computer is ready, you can copy over the various folders, such as videos, pictures, and documents.
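For a manual copy of that sort, robocopy (covered later in this chapter's command‐line list) is a reasonable tool because it preserves timestamps and retries failed files. The user name, drive letter, and log path below are placeholders:

rem Copy the Documents folder, including empty subfolders, with limited retries and a log file
robocopy "C:\Users\olduser\Documents" "D:\Migration\olduser\Documents" /E /COPY:DAT /R:1 /W:1 /LOG:C:\Migration\documents.log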
Upgrading operating systems takes time and involves several steps, such as running Windows Upgrade, installing updates, upgrading the OS, and installing programs. These steps can take a tremendous amount of time, but if you have fewer than a dozen devices to upgrade, you could upgrade them manually. If you have more than a dozen, there are alternatives such as image deployment.
The image deployment process is simple in concept. It starts with a base operating system on a reference computer utilizing real hardware or a virtual machine. The software is then installed on the operating system, an answer file is created using the Windows 11 Assessment and Deployment Kit (ADK), and the Sysprep tool is run on the operating system. The last step is to use a tool like Windows Deployment Services (WDS) to create an image of the operating system. Once the image is captured, it can be deployed to the other computers using the Windows Preinstallation Environment (WinPE).
Windows 10/11 has the ability to upgrade at any time from one edition of the operating system to a higher one (for example, from Windows 10/11 Home to Windows 10/11 Pro). This can easily be accomplished by entering the appropriate activation product key. You can access the activation menu by clicking the Start menu, typing Activation, then clicking the Activation shortcut. You can only upgrade editions; downgrading of retail editions is not supported. However, downgrading of volume license editions, such as Windows 10/11 Enterprise to Windows 10/11 Pro, can be achieved. Downgrading of Windows 10/11 Education to Windows 10/11 Pro can also be done. Downgrading of volume license editions is not formally supported by Microsoft but can be accomplished.
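As an illustration of how this can be checked and performed from the command line (not the Settings procedure described above), DISM reports the installed edition and the editions it can move to, and slmgr can install a new product key. The key shown is only a placeholder pattern, not a real key:

rem Show the installed edition and the editions this installation can be upgraded to
DISM /Online /Get-CurrentEdition
DISM /Online /Get-TargetEditions
rem Install a new product key (placeholder shown)
slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX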
The Windows platform has always had security updates and feature updates. In older versions of Windows, such as XP, the features were subtle additions to the operating system. However, with the introduction of Windows 10, entire releases have been dedicated to major new features, such as the Creators Update and the Anniversary Update. Since then, the features have not been as dramatic, but Windows continues to get feature updates with every major/minor update.
Windows versions change twice a year (semi‐annually) and have done so since the introduction of Windows 10. They just haven't been obvious, because they were downloaded as a Windows Update. The original version of Windows 10 is 1507, which was released in July 2015. You may have noticed the pattern. The version is a date code, consisting of the last two digits of the year (15) and the two‐digit month (07). So, it's simple to calculate when the last major update was released and what is currently installed. With the October 2020 release of Windows 10, Microsoft deviated from this naming convention, using H1 for first half and H2 for second half of the year. For example, version 21H1 was released in the first half of 2021. Windows 11 also follows the same date code with its initial release in October 2021; its date code is 21H2.
Versions are updated twice a year (semi‐annually), usually in spring and fall. They are often referred to as the Windows 10 Spring Update or Windows 10 Fall Update, respectively. They also have a theme, such as the Fall Creators Update, which bundled content‐creation tools, or the Anniversary Update, which bundled new features. As of this writing, the current Windows 10 version is 21H2 and the Windows 11 version is 21H2. They were both used in the development of this book. Using winver.exe, you can see the actual version of the operating system, as shown in Figure 15.34.
FIGURE 15.34 Discovering your version of Windows 11
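If you need the version information in a script rather than a dialog box, one option (an aside, not part of the exam tool list) is to filter the output of the systeminfo command:

systeminfo | findstr /B /C:"OS Name" /C:"OS Version"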
The life cycle of a Windows product has an end date for support, generally a year to two years after the product has been released. Once the end date is met, Microsoft will no longer support that version of Windows. The retirement date of Windows is generally 10 to 12 years, depending on the operating system's popularity. When the product reaches its retirement date, security updates are no longer furnished for the operating system. Therefore, you should be planning OS upgrades on a consistent schedule.
Probably the biggest change to Windows 10/11 is the way updates are delivered to the operating system. The end user no longer has the option to choose whether they want to install Windows Updates; they are mandatory and unavoidable. You can pause updates for up to 7 days on the main Windows Update screen, or pause all updates for up to 35 days, but inevitably they will be installed. To pause Windows Updates, click the Start menu, select the Settings gear, click Update & Security, choose Advanced Options, and then click Select Date and choose a date up to 35 days later. Figure 15.35 shows the advanced options.
FIGURE 15.35 Advanced options for Windows Updates
Windows 11 has identical controls for pausing Windows Updates. To pause Windows Updates, click the Start menu, select the Settings app, click Windows Update, and then choose 1 to 5 weeks (35 days) in the drop‐down for the Pause Updates section.
Windows 10/11 has three different build branches of code (updates) that will be installed on a regular basis with automatic updates. The branches are as follows:
Table 15.2 details the list of the various Windows 10/11 editions and the options for changing their service update channels.
Windows 10/11 Edition | General Availability Channel | Insider Program | Long‐Term Servicing Channel |
---|---|---|---|
Home | Yes | Yes | No |
Pro | Yes | Yes | No |
Enterprise | Yes | Yes | No |
Enterprise LTSC | No | No | Yes |
Education | Yes | Yes | No |
TABLE 15.2 Windows servicing channel options
You can begin the installation or upgrade process by booting from a number of sources. There are three sources in particular with which you should be familiar: optical disc (CD‐ROM/DVD), USB flash drive, and network boot (PXE). The one most commonly used for an attended installation is the CD‐ROM/DVD boot. (They are identical in functionality.) Because Windows 10/11 only comes on DVD, though, the CD‐ROM option applies to older operating systems, not this one.
You can boot a PC over the network (rather than from a DVD, USB, or hard disk) with Windows Preinstallation Environment (WinPE), which is a stub operating system that creates a Pre‐boot Execution Environment (PXE). A stub operating system is characterized as a scaled‐down version of the primary operating system. A stub operating system usually has basic functionality to perform a basic task, such as the installation of the Windows OS. When Windows 10 is installed with this method, it is considered a remote network installation.
You can install or upgrade Windows with traditional installation media, such as DVD, USB flash drives, and network boot (PXE), but these methods all tend to be a bit slow. An alternative is to install or upgrade from an external/hot‐swappable drive, such as a USB hard drive or an eSATA drive. However, the top speed of eSATA is 6 Gbps, compared to the top speed of USB 3.1 at 10 Gbps.
There is one other final option for installing Windows that is often used for reinstallation of the operating system by the original equipment manufacturer (OEM). That option is an internal hidden partition that contains the installation media. The OEM will often include this partition on the hard drive to allow the user to easily reinstall the operating system in the event of problems. In recent years, OEMs have moved away from an internal hard drive partition in favor of the Reset Your PC option in Windows.
With every great plan there are unplanned consequences; we call them considerations. When installing Windows 10/11 or upgrading to Windows 10/11, there are several considerations surrounding the applications you need to use and the hardware Windows 10/11 is being installed on.
When you upgrade an operating system, you have the potential for data loss. Therefore, it is always advisable to back up files and user preferences before starting an upgrade. Depending on the user you are upgrading in your organization, if files are lost it could cause catastrophic loss of sales, payments, and most importantly time. Whenever possible, perform a full‐drive backup, because settings may be in a spot you wouldn't normally back up. If a full‐drive backup cannot be performed, then perhaps replace the drive with another one. Label the original drive removed and recycle it after you confirm that everything is acceptable with the end user.
The applications must be supported on Windows 10/11. You may think that if an application runs, then it's supported, but you'd be wrong. Just because a program runs on Windows 10/11 doesn't mean that it was meant to be run on the operating system. It is up to the discretion of the software vendor whether they will support you on the latest operating system, and this is often discovered only when you need help from them. So always check that the application is supported on the latest OS before upgrading. In many cases you'll find that software vendors require the latest version of their product to be purchased or installed for it to be supported on the latest OS. Vendors also keep applications backward compatible with older operating systems, because not everyone will be on the latest and greatest operating system.
The hardware that Windows 10/11 is running on is another consideration. Many motherboards and peripherals need third‐party drivers. These drivers must be supported on Windows 10/11 or they may not function correctly. Older network interface cards (NICs) are notorious for not being supported on Windows 10/11. So, always check the hardware vendor's website before upgrading to Windows 10/11.
Although the exam focuses on the Windows operating systems, it tests a great number of concepts that carry over from the Microsoft Disk Operating System (MS‐DOS). MS‐DOS was never meant to be extremely user friendly. Its roots are in Control Program for Microcomputers (CP/M), which, in turn, has its roots in UNIX. Both of these older OSs are command line–based, and so is MS‐DOS. In other words, they all use long strings of commands typed in at the computer keyboard to perform operations. Some people prefer this type of interaction with the computer, including many folks with technical backgrounds. Although Windows has left the full command‐line interface behind, it still contains a bit of DOS, and you get to it through the command prompt.
Although you can't tell by looking at it, the Windows command prompt
is actually a Windows program that is intentionally designed to
have the look and feel of a DOS command line. Because it is, despite its
appearance, a Windows program, the command prompt provides all the
stability and configurability you expect from Windows. You can access a
command prompt by running cmd.exe.
A number of diagnostic utilities are often run at the command prompt. They can be broken into two categories: networking and operating system. The utilities associated with networking appear in other chapters, but the focus here is on the utilities associated with the operating system.
The OS command-line tools that you are expected to know for the exam are cd, dir, md, rmdir, ipconfig, ping, hostname, netstat, nslookup, chkdsk, net user, net use, tracert, format, xcopy, copy, robocopy, gpupdate, gpresult, shutdown, sfc, diskpart, pathping, winver, and [command name] /?. They are discussed in the sections that follow, along with the commands available with standard privileges, as opposed to those with administrative privileges.
The power of the command line comes from the level of detail that can be attained with simple commands; several commands let you extract a high level of detail. However, the command line is an unforgiving interface that requires exact commands, whereas the GUI only requires you to point and click but lacks that level of detail. In the following section, we cover basic navigation at the command line.
The dir command is used to display a list of the files and folders/subdirectories within a directory. When you use it without any parameters, dir will show you not only that information but also the volume label and serial number, along with the amount of free space, in bytes, remaining on the disk.
Wildcards can be used with the command to list all files that begin with a certain letter or end with certain letters. For example, typing dir *.txt lists all the text files in a directory. A plethora of parameters are available to customize the results or the display. Table 15.3 lists some of the most common switches available for dir.
Switch | Purpose |
---|---|
/a |
Allows you to specify the attributes of files you are seeking (hidden, system, and so on). |
/o |
Allows you to specify a different display order (alphabetic is the default). |
/l |
Displays the results in lowercase. |
/s |
Recursively searches through subdirectories as well as the current directory. |
/t |
Specifies which time field (creation, access, or last written) is displayed or used for sorting. |
/p |
Displays the results one page/screen at a time. |
/q |
Shows file ownership. |
TABLE 15.3 Common dir switches
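As a quick, hedged illustration of the switches in Table 15.3 (the file names are hypothetical and the output will vary on your system), the first command lists only hidden files in the current directory, and the second pages through the text files sorted oldest to newest:
C:\Users\Sybex>dir /a:h
C:\Users\Sybex>dir *.txt /o:d /p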
The cd, md, and rmdir commands are used to change (or display), make, and remove directories, respectively. The commands cd, md, and rd are shorthand versions of the chdir, mkdir, and rmdir commands, respectively. Table 15.4 lists their usage and switches.
Command | Purpose |
---|---|
cd [path] |
Changes to the specified directory. |
cd /d [drive:][path] |
Changes to the specified directory on the drive. |
cd .. |
Changes to the directory that is up one level. |
cd \ |
Changes to the root directory of the drive. |
md [drive:][path] |
Makes a directory in the specified path. If you don't specify a path, the directory will be created in your current directory. |
rmdir [drive:][path] |
Removes (deletes) the specified directory. |
rmdir /s [drive:][path] |
Removes all directories and files in the specified directory, including the specified directory itself. |
rmdir /q [drive:][path] |
Quiet mode. You won't be asked whether you're sure you want to delete the specified directory when you use /s. |
TABLE 15.4 cd/md/rd usage and switches
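The following short session is a hedged example (the C:\Temp\Reports path is hypothetical). It creates a directory, moves into it, steps back up one level (because you cannot remove the directory you are currently standing in), and then removes the directory and everything inside it:
C:\Users\Sybex>md C:\Temp\Reports
C:\Users\Sybex>cd C:\Temp\Reports
C:\Temp\Reports>cd ..
C:\Temp>rmdir /s /q Reports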
So far, you've seen the basics of looking at directories with the dir command, changing directories, making directories, and removing them. However, up to this point we have assumed you are on the same partition. The cd command will change directories within a drive letter, such as the C: drive, but it will not change drive letters unless you supply the /d switch. To change drives without using the cd command, just enter the drive letter followed by a colon. For example, if you want to change to the D: drive, enter d: at the command prompt; to change back, enter c: at the command prompt.
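Here is a brief example that assumes a hypothetical D: drive containing a Data folder. It switches drives by typing the drive letter and a colon, and then uses cd /d to jump back to a folder on C: in a single step:
C:\Users\Sybex>d:
D:\>cd Data
D:\Data>cd /d C:\Users\Sybex
C:\Users\Sybex>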
Now that you've learned how to navigate the command prompt to look at files, let's use that knowledge in Exercise 15.1.
Windows is a network operating system, which means that the operating system and its principal user rely on the network for connectivity to information. This is where the command line becomes really useful to the administrator of the PC. The command line will return a large amount of data that is normally not suited to a graphical user interface (GUI). The following is a short list of commands that can help you diagnose network connectivity issues from the command line.
The ipconfig
command is a network administrator's best
friend—it assists in the diagnosis of network problems with the
operating system. The ipconfig
command without any switches
displays basic information, such as the IP address, subnet mask, default
gateway, and DNS suffix. The command ipconfig /all
lists
adapters and each one’s assigned IP address, subnet mask, default
gateway, DNS suffix, DNS server(s), DHCP server, and MAC
address, just to name the most important elements. Viewing these
assignments can help you diagnose the current network status of the
connection.
In addition to verifying the status of a network connection's
assignments, you can release and renew DHCP‐assigned IP addresses. The
/release
switch releases the IP address, and the
/renew
switch renews the lease of an IP address.
The ipconfig
command also allows
you to view the local DNS cache with the /displaydns
switch. You can flush the local DNS cache with the
/flushdns
switch. These switches come in handy when a DNS
entry has changed and you want to immediately flush the cache and verify
any cached entries.
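The following commands illustrate the switches just described; they assume the adapter is configured for DHCP, and the output (omitted here) will differ on every system:
C:\Users\Sybex>ipconfig /release
C:\Users\Sybex>ipconfig /renew
C:\Users\Sybex>ipconfig /flushdns
C:\Users\Sybex>ipconfig /displaydns | more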
Next to the ipconfig
command, the ping
command is the runner‐up as the network administrator's best friend. The
ping
command allows you to verify network connectivity via
Internet Control Message Protocol (ICMP) packets. A common
troubleshooting step used by network administrators is to ping the
default gateway. If the gateway replies, you have verified that your
computer has basic connectivity to it, and the connectivity problem
probably lies beyond that device, or your subnet mask is incorrect. An
example of a successful ping is as follows:
C:\Users\Sybex>ping 172.16.1.1
Pinging 172.16.1.1 with 32 bytes of data:
Reply from 172.16.1.1: bytes=32 time<1ms TTL=64
Reply from 172.16.1.1: bytes=32 time<1ms TTL=64
Reply from 172.16.1.1: bytes=32 time<1ms TTL=64
Reply from 172.16.1.1: bytes=32 time<1ms TTL=64
Ping statistics for 172.16.1.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
C:\Users\Sybex>
The tracert
command allows the network administrator to
verify the path a network packet travels to its destination. As the
diagnostic packet passes through the internetwork, each router responds
with a response time. This enables you to locate a fault in an
internetwork. Here's an example:
C:\Users\Sybex>tracert 8.8.8.8
Tracing route to google‐public‐dns‐a.google.com [8.8.8.8]
over a maximum of 30 hops:
1 <1 ms <1 ms <1 ms pfsense.Sybex.local [172.16.1.1]
2 13 ms 12 ms 17 ms 96.120.62.213
3 15 ms 15 ms 12 ms te04012.comcast.net [68.86.101.141]
4 13 ms 19 ms 12 ms 162.151.152.153
5 12 ms 13 ms 20 ms 96.108.91.78
6 22 ms 14 ms 20 ms 96.108.91.121
7 21 ms 24 ms 20 ms be‐7016‐cr02.comcast.net [68.86.91.25]
8 20 ms 20 ms 26 ms be‐10130‐pe04.comcast.net [68.86.82.214]
9 20 ms 20 ms 21 ms as040‐2‐c.comcast.net [75.149.229.86]
10 22 ms 21 ms 20 ms 108.170.240.97
11 20 ms 23 ms 21 ms 108.170.226.85
12 20 ms 22 ms 18 ms google‐public‐dns‐a.google.com [8.8.8.8]
Trace complete.
C:\Users\Sybex>
pathping
, another command‐line tool, combines the
benefits of tracert
and ping
. The tool can be
used to diagnose packet loss (or suspected packet loss) to a destination
website. It is invaluable when a network administrator needs to prove to
their ISP that packet loss is occurring on the provider's network.
The tool will first trace the entire path to a destination IP address or DNS host. Then, each of the hops will be tested with ICMP for packet loss and round‐trip time. It easily identifies router hops that are causing the delay or packet loss. The following is an example of a pathping to my provider's DNS server:
C:\Users\Sybex>pathping 75.75.75.75
Tracing route to cdns01.comcast.net [75.75.75.75]
over a maximum of 30 hops:
0 Wiley.sybex.local [172.16.1.101]
1 pfSense.sybex.local [172.16.1.1]
2 96.120.62.213
3 te‐0‐5‐0‐12‐sur02.pittsburgh.pa.pitt.comcast.net [69.139.166.77]
4 be‐11‐ar01.mckeesport.pa.pitt.comcast.net [68.86.147.109]
5 be‐7016‐cr02.ashburn.va.ibone.comcast.net [68.86.91.25]
6 ae‐4‐ar01.capitolhghts.md.bad.comcast.net [68.86.90.58]
7 ur13‐d.manassascc.va.bad.comcast.net [68.85.61.242]
8 dns‐sw01.manassascc.va.bad.comcast.net [69.139.214.162]
9 cdns01.comcast.net [75.75.75.75]
Computing statistics for 225 seconds…
Source to Here This Node/Link
Hop RTT Lost/Sent = Pct Lost/Sent = Pct Address
0 Wiley.sybex.local [172.16.1.101]
0/ 100 = 0% |
1 0ms 0/ 100 = 0% 0/ 100 = 0% pfSense.sybex.local [172.16.1.1]
0/ 100 = 0% |
2 14ms 0/ 100 = 0% 0/ 100 = 0% 96.120.62.213
0/ 100 = 0% |
3 15ms 0/ 100 = 0% 0/ 100 = 0% te‐0‐5‐0‐12‐sur02.pittsburgh.pa.pitt.comcast.net [69.139.166.77]
0/ 100 = 0% |
4 15ms 0/ 100 = 0% 0/ 100 = 0% be‐11‐ar01.mckeesport.pa.pitt.comcast.net [68.86.147.109]
0/ 100 = 0% |
5 23ms 0/ 100 = 0% 0/ 100 = 0% be‐7016‐cr02.ashburn.va.ibone.comcast.net [68.86.91.25]
0/ 100 = 0% |
6 23ms 0/ 100 = 0% 0/ 100 = 0% ae‐4‐ar01.capitolhghts.md.bad.comcast.net [68.86.90.58]
0/ 100 = 0% |
7 25ms 0/ 100 = 0% 0/ 100 = 0% ur13‐d.manassascc.va.bad.comcast.net [68.85.61.242]
0/ 100 = 0% |
8 24ms 0/ 100 = 0% 0/ 100 = 0% dns‐sw01.manassascc.va.bad.comcast.net [69.139.214.162]
0/ 100 = 0% |
9 23ms 0/ 100 = 0% 0/ 100 = 0% cdns01.comcast.net [75.75.75.75]
Trace complete.
C:\Users\Sybex>
The netstat
command allows you to view listening and
established network connections for the operating system. Several
switches can be used with the netstat
command. One of the
most useful is the ‐b
switch, which displays the name of
the application and its current established connections. Adding the
‐a
switch displays all the listening connections in
addition to the established connections. A basic example follows:
C:\Users\Sybex>netstat
Active Connections
Proto Local Address Foreign Address State
TCP 127.0.0.1:49750 view-localhost:50912 ESTABLISHED
TCP 127.0.0.1:50912 view-localhost:49751 ESTABLISHED
TCP 172.16.1.181:49208 104.20.60.241:https ESTABLISHED
TCP 172.16.1.181:49599 172.67.181.149:https ESTABLISHED
TCP 172.16.1.181:49600 52.167.17.97:https TIME_WAIT
TCP 172.16.1.181:49602 20.50.80.210:https ESTABLISHED
TCP 172.16.1.181:49603 a104-75-163-105:http TIME_WAIT
TCP 172.16.1.181:56759 151.101.1.140:https ESTABLISHED
TCP 172.16.1.181:64151 iad23s96-in-f10:https CLOSE_WAIT
TCP 172.16.1.181:64152 iad66s01-in-f13:https CLOSE_WAIT
TCP 172.16.1.181:64154 iad23s96-in-f10:https CLOSE_WAIT
C:\Users\Sybex>
DNS is one of the most important network services; the operating system and its users rely on it to resolve names such as www.sybex.com to IP addresses. Without DNS, we simply couldn't remember the millions of IP addresses; it would be like trying to remember the phone number of every person you've ever met or will ever meet.
When DNS problems arise, the nslookup command allows you to verify that DNS is working correctly and that the correct results are being returned. The simplest way to use nslookup is an inline query, such as nslookup www.sybex.com. This returns the IP address associated with the fully qualified domain name (FQDN) www.sybex.com. The nslookup command can also be used in interactive mode by typing nslookup and pressing Enter. This mode allows you to query more than the associated IP address, depending on the type of DNS record you are trying to diagnose. By default, nslookup looks up A and CNAME records, which are the records most commonly queried when diagnosing connectivity issues. By specifying the -type argument, you can change the default record queried. The following is an example of retrieving the IP address for the FQDN www.sybex.com, as well as the use of the -type argument:
C:\Users\Sybex>nslookup www.sybex.com
Server: pfsense.wiley.local
Address: 172.16.1.1
Non-authoritative answer:
Name: www.sybex.com
Address: 63.97.118.67
C:\Users\Sybex>nslookup -type=mx sybex.com
Server: pfSense.wiley.local
Address: 172.16.1.1
Non-authoritative answer:
sybex.com MX preference = 20, mail exchanger = cluster1a.us.messagelabs.com
sybex.com MX preference = 10, mail exchanger = cluster1.us.messagelabs.com
C:\Users\Sybex>
The hostname
command allows the administrator to keep
their sanity. The command returns the hostname of the computer that you
have the command prompt open on. It can get pretty confusing for the
administrator when they jump from one computer to another and remain in
the command line. So by typing the command
hostname
, you can
positively identify the system you are about to execute a command on.
The following is an example of the command's use:
C:\Users\Sybex>hostname
Wiley-023432
C:\Users\Sybex>
There are several other command‐line tools just as useful as the connectivity tools we've discussed, but they are used for various other purposes. The following tools are part of the objectives for the 220‐1102 exam, and they are commonly used by Windows administrators.
Active Directory refreshes local and Active Directory–based policies every 90 minutes in what is called a background refresh cycle. When the background refresh happens, policies are reapplied, forcing the settings that the administrator has configured in the Group Policy settings.
The gpupdate
command is used to update Group Policy
settings. It refreshes, or changes, both local and Active
Directory–based policies and replaces some of the functionality that
previously existed with the secedit
command.
The gpupdate
command can force the refresh cycle
immediately with the /force
switch. In addition, you can
target the computer or the user, which is particularly useful when
trying to diagnose a problem with Group Policy Objects (GPOs). The
following is an example of forcing a refresh for the computer GPO
settings:
C:\Users\bohack>gpupdate /force /target:computer
Updating policy…
Computer Policy update has completed successfully.
C:\Users\bohack>
The gpresult
command is used to show the Resultant Set
of Policy (RSoP) report/values for a remote user and computer. Bear in
mind that configuration settings occur at any number of places: they are
set for a computer, a user, a local workstation, the domain, and so on.
Often one of the big unknowns is which set of configuration settings
takes precedence and which is overridden. With gpresult
, it
is possible to ascertain which settings apply.
A number of switches can be used in conjunction with the
gpresult
command. The most useful switches are the
/r
and /z
switches. The /r
switch
allows you to see the RSoP summary of GPOs applied. This allows you to
quickly verify if a policy is being applied. You can then use the
/z
switch to turn on super‐verbosity, which allows the
output to display the exact settings being applied.
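As a hedged sketch, the following commands display the RSoP summary for the current user and computer and then redirect the super-verbose report to a hypothetical text file (the computer scope of the report typically requires an elevated prompt, and the C:\Temp folder is assumed to exist):
C:\Users\Sybex>gpresult /r
C:\Users\Sybex>gpresult /z > C:\Temp\rsop.txt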
The net
command can be used with several different
subcommands. Most of the subcommands have been deprecated with the last
few releases of Windows.
net use The net use subcommand is still widely used by administrators to map drive letters to network shares. The syntax for mapping a drive of Z: to the network location of \\server\share is as follows:
net use Z: \\server\share
net user The net user subcommand is also widely used by administrators. Entering net user by itself lists all the local accounts on a Windows installation; by supplying arguments, it can also list local or domain accounts. If you wanted to create a local account of usertwo with a password of Passw0rd, you would use the following syntax:
net user usertwo Passw0rd /add
There are several commands for the administration of drives and folders. In this section, we will cover the common command‐line tools that you will use in day‐to‐day administration of Windows. All of these tools can be accessed in the GUI as well, but the advantage to using the command line is avoiding clicks and potential mistakes.
The format
command is used to wipe data off disks and
prepare them for new use. Before a hard disk can be formatted, it must
have partitions created on it. (Partitioning was done in the DOS days
with the fdisk
command, but that command does not exist in
current versions of Windows, having been
replaced with diskpart
.) The syntax for format
is as follows:
format [volume] [switches]
The volume
parameter describes the
drive letter (for example, D:
), mount point, or volume
name. Table 15.5 lists some common
format
switches.
Switch | Purpose |
---|---|
/fs: [filesystem ] |
Specifies the type of filesystem to use (FAT, FAT32, or NTFS) |
/v: [label ] |
Specifies the new volume label |
/q |
Executes a quick format |
TABLE 15.5 format switches
There are other options as well—to specify allocation sizes, the number of sectors per track, and the number of tracks per disk size. However, we don't recommend that you use these unless you have a very specific need. The defaults are just fine.
Thus, if you wanted to format your D: drive as NTFS, with a name of HDD2, you would type the following:
format D: /fs:ntfs /v:HDD2
The copy
command does what it says: it makes a copy of a
file in a second location. (To copy a file and then remove it from its
original location, use the move
command.) Here's the syntax
for copy
:
copy [filename] [destination]
It's pretty straightforward. There are several switches for
copy
, but in practice they are rarely used. The three most
commonly used switches are /a
, which indicates an ASCII
text file; /v
, which verifies that the files are written
correctly after the copy; and /y
, which suppresses the
prompt asking whether you're sure that you want to overwrite files if
they exist in the destination directory.
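As an illustrative example (the file name is hypothetical and the D:\Backup folder is assumed to exist), the following copies a file to a second drive, verifies the write, and suppresses the overwrite prompt:
C:\Users\Sybex>copy /v /y report.txt D:\Backup\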
If you are comfortable with the copy
command, learning
xcopy
shouldn't pose too many problems. It's basically an
extension of copy
with one notable exception—it's designed
to copy folders as well as files. The syntax is as follows:
xcopy [source] [destination][switches]
There are 26 xcopy
switches. Some commonly used ones are
listed in Table 15.6.
Switch | Purpose |
---|---|
/a |
Copies only files that have the Archive attribute set and does not clear the attribute (useful for making a quick backup of files while not disrupting a normal backup routine). |
/e |
Copies directories and subdirectories, including empty directories. |
/f |
Displays full source and destination filenames when copying. |
/g |
Allows copying of encrypted files to a destination that does not support encryption. |
/h |
Copies hidden and system files as well. |
/k |
Copies attributes (by default, xcopy
resets the Read‐Only attribute). |
/o |
Copies file ownership and ACL information (NTFS permissions). |
/r |
Overwrites read‐only files. |
/s |
Copies directories and subdirectories but not empty directories. |
/u |
Copies only files that already exist in the destination. |
/v |
Verifies the size of each new file. |
TABLE 15.6 xcopy switches
Perhaps the most important switch is /o
. If you use
xcopy
to copy files from one location to another, the
filesystem creates new versions of the files in the new location without
changing the old files. In NTFS, when a new file is created, it inherits
permissions from its new parent directory. This could cause problems if
you copy files. (Users who didn't have access to the file before might
have access now.) If you want to retain the original permissions, use
xcopy /o
.
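The following hedged example, with hypothetical paths, copies an entire folder tree, including empty directories and hidden and system files, while preserving attributes and NTFS permissions. The /i switch, not listed in Table 15.6, simply tells xcopy to treat the destination as a directory rather than prompting:
C:\Users\Sybex>xcopy C:\Projects D:\Backup\Projects /e /h /k /o /i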
The robocopy.exe
(Robust File Copy) utility is included
with recent versions of Windows and has the big advantage of being able
to accept a plethora of specifications and keep NTFS permissions intact
in its operations. The /mir
switch, for example, can be
used to mirror a complete directory tree.
An excellent resource on how to use robocopy
can be
found at https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy
.
The robocopy utility is the Swiss army knife of file and folder copy utilities. It can copy files and their attributes, including NTFS permissions.
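A hedged example follows (the paths are hypothetical). The /mir switch makes the destination an exact mirror of the source, which also deletes destination files that no longer exist in the source, so use it with care; /sec copies NTFS permissions along with the data, and /r:1 /w:1 limit the retry count and wait time:
C:\Users\Sybex>robocopy C:\Projects D:\Backup\Projects /mir /sec /r:1 /w:1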
The diskpart.exe
utility shows the partitions and lets
you manage them on the computer's hard drives. You can perform the same
functions in the diskpart
utility as you can perform in the
GUI, which is discussed later in this chapter. Because of the enormous
power that diskpart
holds, membership in the Administrators
local group (or equivalent) is required to run
diskpart
.
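The following interactive session is a non-destructive sketch run from an elevated command prompt; the disk and partition numbers on your system will differ:
C:\Windows\system32>diskpart
DISKPART> list disk
DISKPART> select disk 0
DISKPART> list partition
DISKPART> exit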
You can use the Windows chkdsk.exe
utility to create and
display status reports for the hard disk. chkdsk
can also
correct filesystem problems (such as cross‐linked files) and scan for
and attempt to repair disk errors. You can manually start
chkdsk
by right‐clicking the problem disk and selecting
Properties. This will bring up the Properties dialog box for that disk,
which shows the current status of the selected disk drive.
By clicking the Tools tab at the top of the dialog box and then
clicking the Check button in the Error‐checking section, you can start
chkdsk
. Exercise 15.2 walks you through
starting chkdsk
in the GUI, and Exercise
15.3 does the same from the command line.
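From an elevated command prompt, a typical invocation looks like the following hedged example, where D: is a hypothetical data volume; /f fixes filesystem errors and /r locates bad sectors. If the volume is in use (such as C:), chkdsk offers to schedule the scan for the next restart:
C:\Windows\system32>chkdsk D: /f /r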
There are lots of other command‐line tools, so many that we could dedicate a book to all of them. However, for the 220‐1102 exam, there are specific command‐line tools that CompTIA wants you to know. In the following, we will cover all the other miscellaneous command‐line tools that don't fit into one of the previously mentioned categories.
The shutdown.exe
utility can be used to schedule a
shutdown (complete or a restart) locally or remotely. You can even have
the computer enter into a hibernation power state. In addition,
shutdowns of the computer can be logged with a variety of reasons, from
unplanned to application maintenance. A message can also be specified
and announced to users for the shutdown. The syntax of the command is as
follows:
shutdown [/i | /l | /s | /sg | /r | /g | /a | /p | /h | /e | /o] [/hybrid] [/soft] [/fw] [/f] [/m \\computer][/t xxx][/d [p|u:]xx:yy [/c "comment"]]
There are a lot of different switches, as you can see from the
previous usage syntax. Table 15.7 lists the most important
switches for the shutdown
command.
Switch | Purpose |
---|---|
/s |
Shut down the computer. |
/sg |
Shut down the computer. On the next boot, restart any registered applications. |
/r |
Do a full shutdown and restart the computer. |
/g |
Do a full shutdown and restart the computer. After the system is rebooted, restart any registered applications. |
/a |
Abort a system shutdown. |
/h |
Hibernate the local computer. |
/o |
Go to the advanced boot options menu and restart the
computer. Must be used with the /r option. |
/m \\computer |
Specify the target computer. |
/t xxx |
Set the timeout period before shutdown to
xxx seconds. The valid range is 0–315360000 (10
years), with a default of 30. If the timeout period is greater than 0,
the /f parameter is implied. |
TABLE 15.7 shutdown switches
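For instance, the following example (the comment text is arbitrary) schedules a restart in five minutes with a message to logged-on users, and then aborts that pending shutdown:
C:\Users\Sybex>shutdown /r /t 300 /c "Restarting for maintenance in 5 minutes"
C:\Users\Sybex>shutdown /a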
The System File Checker (sfc.exe
) is a command
line–based utility that checks and verifies the versions of system files
on your computer. If system files are corrupted, sfc
will
replace the corrupted files with correct versions.
The syntax for the sfc
command is as follows:
sfc [switch]
Table 15.8 lists the switches available for sfc.
Switch | Purpose |
---|---|
/scanfile |
Scans a file that you specify and fixes problems if they are found. |
/scannow |
Immediately scans all protected system files and replaces corrupted files with cached copies. |
/verifyonly |
Scans protected system files and does not make any repairs or changes. |
/verifyfile |
Verifies the integrity of the specified file but does not make any repairs or changes. |
/offbootdir |
Specifies the location of the offline boot directory for offline repair. |
/offwindir |
Specifies the location of the offline Windows directory for offline repair. |
TABLE 15.8 sfc switches
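The two most common invocations are shown in the brief example that follows; both must be run from an elevated command prompt, as described next:
C:\Windows\system32>sfc /scannow
C:\Windows\system32>sfc /verifyonly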
To run sfc
, you must be logged in as an administrator or
have administrative privileges. If the System File Checker discovers a
corrupted system file, it will automatically overwrite the file by using
a copy held in another directory. The most recent Windows versions store
the files in a large number of discrete folders beneath
C:\WINDOWS\WINSXS
(where they are protected by the system
and only TrustedInstaller is allowed direct access to them—the cache is
not rebuildable). TrustedInstaller is a service in Windows 10/11 that
enables the installation, removal, and modification of system
components.
If you attempt to run sfc
from a standard command
prompt, you will be told that you must be an administrator running a
console session in order to continue. Rather than opening a standard
command prompt, on Windows 10/11 click the Start menu, type
cmd
, and then press
Ctrl+Shift+Enter. On Windows 8.1 and below, choose Start ➢ All Programs
➢ Accessories, and then right‐click Command Prompt and select Run As
Administrator. The UAC will prompt you to continue, and then you can run
sfc
without a problem.
winver.exe is not a command-line tool, but it is a useful tool for gleaning information about the operating system; running it displays a GUI dialog box. Its command-line sibling is ver.exe. This command has its roots all the way back in DOS, and it is still available in the latest versions of Windows. Running it returns the version of Windows, as shown in the following:
C:\Users\NetworkedMinds>ver.exe
Microsoft Windows [Version 10.0.19043.1237]
C:\Users\NetworkedMinds>
The help
command does what it says: it gives you help.
If you just type help
and
press Enter, your computer gives you a list of system commands that you
can type. To get more information, type the name of a command that you
want to learn about after typing
help
. For example, if you
type help rd
and press
Enter, you will get information about the rd
command.
You can get the same help information by typing /?
after
the command.
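For example, either of the following displays the same usage information for the rd command:
C:\Users\Sybex>help rd
C:\Users\Sybex>rd /?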
By default, any user can open a command prompt and begin typing the
names of command‐line commands. Certain commands, however, can be
dangerous when run and, as a safety precaution, require administrative
privileges. The sfc
command was mentioned earlier, for
example, as requiring administrative privileges.
With Windows 7, rather than opening a standard command prompt, choose
Start ➢ All Programs ➢ Accessories, right‐click Command Prompt, and then
choose Run As Administrator. The UAC will prompt you to continue, and
then you can run sfc
without a problem. With Windows 8,
there are two choices on the Start menu; the latter allows you to open
the command prompt with administrative privileges. Windows 10/11 behaves
similarly to Windows 7; you can right‐click the Command Prompt search
result and choose Run As Administrator, as shown in Figure
15.36.
FIGURE 15.36 Opening a command prompt with admin privileges in Windows 10
In most cases, if you try to run a utility that requires administrative privileges and you are not currently in a console session that has them, an error message will notify you of this.
CompTIA expects you to know a number of topics related to networking and Windows. This section covers the various scenarios in which you will deploy Windows in a network. First, we'll cover small office, home office (SOHO) deployments, and then we'll scale it out to an enterprise network. Many of these options are identical in Windows 8/8.1, with the exception of a few new features that we will highlight as specific to Windows 10/11.
There are several different networking models that you can use to facilitate Windows authentication. There is no single way to implement a network model; it all depends on the needs of the organization to share the resources. Each model has its advantages and disadvantages. This section covers the three most popular methods.
The HomeGroup feature is available in Windows 10 version 1709 and prior, as well as Windows 8/8.1 and Windows 7. The HomeGroup feature allows for the sharing of files and printers with a single password. You can choose which types of resources are shared, such as pictures, documents, music, videos, and printers. However, the HomeGroup feature has been removed as of Windows 10 version 1803. We will discuss alternatives to HomeGroups in the following sections.
The workgroup networking model has existed in the Windows operating system since it was first introduced. Windows can function as both a client and a server simultaneously. When an operating system can function as both a client and a server, it is considered peer‐to‐peer networking. Clients can join the network and leave the network at any time.
The core of the Windows operating system is essentially the same for both a server and a workstation. There are a few differences between the two; for instance, servers normally give priority to background services such as file and printer sharing over the desktop. The second, and most limiting, difference is the restriction of 20 simultaneous client connections to a single workstation. After 20 simultaneous connections are made, all other clients will receive an "Access Denied" error until one of the 20 connections is closed and the session is terminated. This takes some time on the workstation side, so if you have more than 20 clients, you should plan to deploy an actual server system running software such as Windows Server 2019 or Windows Server 2022.
Workgroups are normally used in SOHO environments or in situations that do not require the infrastructure of a dedicated server for authentication. They should be kept to a maximum of 20 clients, with the expectation that each client will maintain its own resources (files and printer sharing). Many small offices use this networking model and never need anything more.
A typical situation where a workgroup is effective is when a printer needs to be shared from a single computer. The disadvantage is that the computer must be on in order for the clients to use the printer. Another disadvantage with workgroups is user authentication, which we will discuss in detail later in this chapter.
When you install Windows 10/11, by default it is joined to the workgroup named Workgroup. However, there may be instances in which you want to join another workgroup. To join another workgroup, perform the following steps:
Domain functionality has existed since Microsoft Windows 3.51 (mid‐1990s). Unlike the workgroup networking model, the domain networking model requires that clients be joined to a domain. Joining a domain creates a trust between the client (Windows client, for example) and the authentication server (Windows Server 2019 running Active Directory, for example). Joining a client to a domain allows users with an account in the domain to log into the client. To join a domain in Windows 10/11, perform the following steps:
FIGURE 15.37 Windows 11 Settings app for joining a domain
An alternate way of joining a domain is through the System Properties dialog box. To do so:
The next dialog box will prompt you to enter your credentials on the domain that allows the joining of the operating system. You will then need to reboot the operating system for the changes to take effect. This method of joining an operating system to a domain has been supported since Windows XP.
FIGURE 15.38 Windows 11 domain credentials prompt
FIGURE 15.39 Windows domain joining
Domains also allow for files and printers to be secured with domain credentials. The key takeaway in the benefit to domains is centralized authentication for users and computers. It is important to note that the term domain describes both the networking model and the friendly name of the security domain of the network.
Microsoft Active Directory is the main authentication technology used with domain controllers in a domain networking model. Active Directory contains objects such as users, computers, and printers, as well as many other object types, including Group Policy Objects (GPOs). Active Directory allows these objects to be grouped logically so that they can be controlled with policies through Group Policy. Active Directory domains can scale to any number of joined clients and user credentials, which is why many large organizations use Active Directory for centralized user authentication.
When you choose to use a workgroup networking model, all authentication is local to the operating system. Windows operating systems contain a local authentication mechanism called the Security Account Manager (SAM). The SAM can be considered a local database of users and groups. All users locally authenticating to the workstation authenticate against this internal database of usernames and passwords and are granted a local access token, as shown in Figure 15.40. The local access token allows the user to access local resources secured with the user's identity.
FIGURE 15.40 Windows local authentication
When clients are joined to a domain, a user with credentials on the domain can log into the workstation. When this happens, they are authenticating against an Active Directory domain controller. An Active Directory domain controller retains information about all access rights for all users and groups in the network. When a user logs into the system, AD issues the user a globally unique identifier (GUID), also known as an access token. Applications that support Active Directory can use this access token to provide access control. When a client is joined to a domain, users can still log into the local operating system using the SAM, as if they have a local account (see Figure 15.41). However, local logins are normally restricted to administrators once a client is joined to the domain.
FIGURE 15.41 Windows domain authentication
Active Directory simplifies the sign‐on process for users and lowers the support requirements for administrators. Access can be established through groups, and it can be enforced through group memberships. Active Directory can be implemented using a Windows Server (such as Windows Server 2019 or Windows Server 2022) computer. All users will then log into the Windows domain using their centrally created AD accounts.
One of the big problems that larger systems must deal with is the need for users to access multiple systems or applications. This may require a user to remember multiple accounts and passwords. The purpose of a single sign‐on (SSO) is to give users access to all the applications and systems that they need when they log in.
SSO operates on the principle that the resource trusts the authentication server. When a user logs in initially, they will authenticate against the authentication server for their organization. When the user then visits the resource, which is normally a cloud‐based resource, it will prompt the authentication server to provide a claim on behalf of the user, as shown in Figure 15.42.
FIGURE 15.42 Single sign‐on
The claim normally contains basic information about the user, such as first and last name, email address, or any other attribute. At no time is the user's password sent, because they authenticated once already. Although we've oversimplified SSO in this example, it really is this simple, without the layers of encryption and complicated trust rules. As we adopt more and more cloud resources, it is becoming the number one way to provide authentication for our users because we never transmit the actual username and password.
A key element of a successful network is the connection between the computer and the network itself. There are a number of different ways to connect to the network, which we will cover in the following section. The key takeaway is that the network will function identically regardless of the connection type. For example, if a computer is joined to an organization's domain, the login process works the same over a wireless connection as it does over a wired one.
The type of connection you choose is based on convenience and the requirements of the connection. As an example, wireless is extremely convenient, but it requires you to be within range of an access point (AP), and the farther you are from the AP, the slower the connection. If your requirement is consistently high bandwidth for a Voice over IP (VoIP) call, then a wired connection is your best choice and wireless would be problematic.
Wired A wired network is most common in organizations with desktop computers. Unfortunately, a wired connection means that there is no mobility for the connection; you are literally wired to a desk.
Wired connections are the most reliable and arguably the easiest to diagnose when there are problems. The network link light gives you a visual indicator that you have a network connection. Windows 10/11 also displays an icon of a computer with a cable in the notification area when a wired connection is detected.
Wireless A wireless connection is found in networks where mobility is required. Wireless is often used in small office, home office (SOHO) settings. It can also be found in small and large organizations that require workers to move around, such as a factory, a sales workforce, or a medical setting, just to name a few.
Connecting to wireless networks is not as straightforward as making a wired connection. There is rarely a visual identifier that you have a wireless connection, such as the link light on a wired connection. Windows 10/11 displays the notification tray icon as a radio wave. If the radio wave is grayed out with an asterisk at the upper left, wireless networks have been detected but you are not connected to any of them. If you click the wireless icon, all of the available wireless networks will be displayed, along with their security status, as shown in Figure 15.43.
FIGURE 15.43 Wireless connectivity
You can then choose a wireless network and select Connect. If the wireless network requires additional security, such as a preshared key (PSK) or a corporate login to a captive portal, the operating system will direct you. By default, wireless networks that you connect to will automatically reconnect when you are in range. When the wireless connection is established, the notification tray icon will appear as a white radio wave. As you move further away from the AP, the wireless indicator will act as a signal strength meter.
Virtual Private Network (VPN) In recent years, virtual private networks (VPNs) have become all the rage for private browsing. These services provide an encrypted tunnel to a remote server from which you can browse the Internet. However, the 220‐1102 exam focuses on traditional VPN technology that provides a secure connection between two endpoints in an organization.
The VPN connection is an overlay network on top of an established network connection called the underlay network. So, you will need an established Internet connection before a VPN can connect. The VPN connection will provide two distinct features to the end user. The first feature it provides is an entry point to the organization's private network. The second feature it provides is end‐to‐end encryption for everything transmitted over the connection between the client and the organization's private network.
Establishing a VPN connection will require information from the organization's VPN appliance or server. You will need the VPN protocol, server address, and the sign‐in info. You can create a VPN connection by navigating the Start button ➢ Settings gear ➢ Network & Internet ➢ VPN; from here you can add a VPN connection, as shown in Figure 15.44.
Wireless Wide Area Network (WWAN) A wireless wide area network (WWAN) connection is created with a cellular data provider, such as Sprint, Verizon, or AT&T, just to name a few. The device requires a special card called a WWAN adapter, although most mobile devices have one built in. You then need to register the connection with the cellular provider to activate it; this usually involves monthly billing for data, and the connection is often metered.
A metered connection is one in which you pay for a specific amount of data; once that amount is reached, you generally pay for overages per gigabyte. Many Windows features, such as Windows Update, behave differently on a metered connection. Windows Update will not download updates over a metered connection, so that it doesn't consume your precious data. Cellular is not the only connection type that can be classified as metered; each of the connection methods in this section can be set as metered, but it is most common with cellular connections.
FIGURE 15.44 VPN connectivity
Once the connection is registered, you can connect via the wireless notification tray icon and select the cellular network you wish to connect to. You can also connect to the cellular network by navigating the Start button ➢ Settings gear ➢ Network & Internet ➢ Cellular, as shown in Figure 15.45. If your device does not support a WWAN device, then the Cellular section will not appear on the Network & Internet screen. After selecting Cellular, you can then configure roaming options, as well as the preference of cellular over Wi‐Fi, and you can tell Windows to treat the connection as a metered connection.
Proxy Settings In addition to establishing a connection, you may need to set a proxy server depending on your organization's policies. In many organizations, the web browser is not allowed to directly request web pages from the destination web server. An intermediary called a proxy server is used to request the web page on behalf of the user. The use of a proxy server allows for caching of frequently accessed web pages, as well as the ability to filter content. The proxy is primarily for web‐based traffic, such as browsing with the Edge browser or Internet Explorer. However, other applications can also elect to use the proxy server, depending on their traffic type.
FIGURE 15.45 Windows Cellular
To configure the proxy settings for Microsoft Edge and Internet Explorer, click the Start button ➢ Settings gear ➢ Network & Internet ➢ Proxy. From the proxy screen you can configure the operating system to automatically use a setup script (JavaScript) by clicking the switch for Use Setup Script and specifying the script address, as shown in Figure 15.46.
You can also specify a manual proxy setup, which is a common configuration task. Simply click the Use A Proxy Server switch and then enter the address and port. Specific websites often require direct access and will not work with a proxy server; you can enter these exceptions in the lower section, separating servers with semicolons. You can also use wildcards if you want to exclude an entire namespace.
FIGURE 15.46 Windows Proxy settings
Now that you understand networking models, authentication, and how to connect to the network, let's focus on the resources that users will access. This section covers the most common types of resources.
A network share is a type of resource sharing that allows access to file and folder resources over a network from a file server. File Explorer is used to access network shares, but you can also use command-line commands to access the resources, as you will see in the following sections. As the name implies, network shares exist on the network; however, they can be mapped to appear as if they are local. The net use command can be used to establish network connections at a command prompt, for example. If you want to connect to a shared network drive and make it your M: drive, the syntax is net use M: \\server\share.
The \\server\share portion of the command is called the Universal Naming Convention (UNC) path. The UNC path is a standard way to describe the server and fileshare to the Windows operating system.
In addition to using the command line, you can use the GUI to map a network share. After browsing to the server by typing \\servername in the Windows File Explorer address bar, right-click the fileshare and select Map Network Drive. If you are using Windows 11, you will need to click Show More Options first. You will then be prompted with some options for mapping the network share, as shown in Figure 15.47.
FIGURE 15.47 Mapping a network drive
Another common type of network resource is printing. Although we often need to access files and folders, at some point we will probably need to print. You can use the same command to connect to a shared printer; the syntax net use lpt1: \\server\printername will map a printer to the LPT1 device. As in the prior example, you can make the printer act as if it is locally connected to the operating system. You can also use the GUI method of connecting a printer by right-clicking the printer after browsing to the server and then selecting Connect, as shown in Figure 15.48.
FIGURE 15.48 Connecting to printers
Administrative shares are automatically created for
administrative purposes on Windows. These shares can differ slightly
based on which operating system is running, but they end with a dollar
sign ($
) to make them hidden. There is one for each volume
on a hard drive (C$
, D$
, and so forth), as
well as admin$
(the root folder—usually
C:\WINDOWS
), and print$
(where the print
drivers are located). These are created for use by administrators and
usually require administrator privileges to access. It's important to
note that they are hidden shares and that any shared folder created with
a trailing $
will be hidden from the users as well. Unless
you know they exist, they will not be visible.
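As a quick illustration (the computer name is hypothetical and administrative rights on the target machine are assumed), you can list the shares offered by the local machine and browse a hidden administrative share over the network:
C:\Users\Sybex>net share
C:\Users\Sybex>dir \\Wiley-023432\c$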
Although you can access shares on file servers and printers that are shared, there are many other shared resources that can be shared and accessed. Files and printers are just the common resources that make up the file and print sharing service that Windows has had built in, arguably since the inception of MS‐DOS. Application sharing is one example of a shared resource. Scanners and faxes are other examples of hardware‐based shared resources. These examples only scratch the surface; the cloud is full of applications and shared resources. All these resources can be authenticated with domain‐based authentication or separate locally based credentials inside the application.
Microsoft Windows has come with a preinstalled firewall since Windows XP Service Pack 2. The Windows Firewall feature was a welcome addition at a time when the Internet could be described as the Wild West. Today, Windows Defender Firewall is an integral part of Windows, and surprisingly, the interface has not changed much since its debut in Windows XP. The original firewall was not turned on by default; beginning with Windows Vista, the firewall is on by default in the inbound direction. By default, the firewall does not filter outbound traffic, but it has the ability to do so if needed.
Windows Defender Firewall scales well as a host-based firewall: an average user can configure it, but if more complicated firewall rules need to be composed, the advanced interface allows an administrator to intervene. Basic configuration is straightforward; you can access the basic firewall controls by clicking Start ➢ Windows System ➢ Control Panel ➢ Windows Defender Firewall, or you can click Start and start typing the word firewall until it appears in the search results. Launching it displays the dialog box shown in Figure 15.49. The basic Windows Defender Firewall dialog box allows you to perform basic firewall tasks, such as turning off the firewall, changing user notification, restoring defaults, and, most importantly, allowing an app or feature through the firewall.
FIGURE 15.49 Windows Defender Firewall
There are a number of applications and services that are preconfigured in the firewall. For example, when you share a folder, the ports associated with filesharing are automatically enabled. Another mechanism exists to allow the firewall to easily configure itself; when a program is launched that listens to a port, an Allow Access or Cancel notification is sent to the user, as shown in Figure 15.50. If the user selects Allow Access, a rule is added to the firewall for the specific application.
FIGURE 15.50 Windows Defender Firewall notifications
Along with the automated mechanism by which firewall rules can be added, you can specify which network profile a rule is active in. Network profiles are identified by the MAC address of the default gateway; the firewall learns your home network by noting the MAC address of your router. If the operating system has not seen that MAC address before, the firewall will ask whether the network is private or public. This way, the firewall can behave differently in your home than it does in an airport or another public setting. There are three network profiles that firewall rules can be active for: public, private, and domain. You can control whether your laptop treats a network as public or private, but if a domain controller exists and the laptop is joined to a domain, the domain profile becomes active.
The individual rules can be examined or individually configured with the Windows Defender Firewall with Advanced Security MMC, as shown in Figure 15.51. You can see each of the rules along with its effective profile of public, private, domain, or all.
Adding rules manually allows for maximum granularity but comes with the price of complexity. You can add a rule based on a program, port, predefined rule, or something totally custom. For example, if you wanted to only allow an incoming port of 2233 via TCP to a specific application awaiting its request for a specific network profile, this interface will allow you to do so. If you are the administrator of a domain, you can also create rules inside a Group Policy Object (GPO) and deploy the rules out to a large group of computers.
FIGURE 15.51 Windows Defender Firewall with Advanced Security
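As a hedged sketch of the port-2233 scenario just described (the rule name, application path, and profile are assumptions, not values from this chapter), the same kind of inbound rule can also be created and verified from an elevated command prompt with netsh:
C:\Windows\system32>netsh advfirewall firewall add rule name="App inbound 2233" dir=in action=allow protocol=TCP localport=2233 program="C:\Apps\listener.exe" profile=private
C:\Windows\system32>netsh advfirewall firewall show rule name="App inbound 2233"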
If you have a router or a server that supports the Dynamic Host Configuration Protocol (DHCP), the client will automatically configure itself with an IP address, subnet mask, default gateway, and the appropriate Domain Name System (DNS) servers. This is the default behavior for all devices, because it is trouble‐free from a user's point of view. For example, if you turn on your laptop at home, your router will serve all the information necessary to get on the home network and the Internet. If you pick up your device and go to work, this process will also happen at your workplace to allow you to connect to servers or the Internet.
If you need to statically configure your network settings, that will require some planning and manual configuration. The first thing you will need is an IP address that is not used by another computer in the network. The subnet mask will also need to match the network you will configure the computer in. If you want to communicate outside the immediate network, you will need a default gateway, which is your router's IP address. A DNS address is also required if you want to translate simple domain names to IP addresses. There are two ways to configure the static IP address: by using the new Settings app and by using the legacy Control Panel applet. You should be familiar with both ways, as the legacy Control Panel applet offers more features, such as alternate IP address configuration.
FIGURE 15.52 Settings App network configuration
FIGURE 15.53 Control Panel network configuration
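In addition to the Settings app and the Control Panel applet, the same static assignment can be made from an elevated command prompt. The following is a hedged example in which the interface name "Ethernet" and all of the addresses are assumptions that must match your own network:
C:\Windows\system32>netsh interface ipv4 set address name="Ethernet" static 192.168.1.50 255.255.255.0 192.168.1.1
C:\Windows\system32>netsh interface ipv4 set dnsservers name="Ethernet" source=static address=192.168.1.1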
Windows 11 and prior versions of Windows also allow for the use of an alternate IP address—that is, an address configured for the system to use in the event the first choice is not available. In order for an alternate configuration to be set, the first choice has to be dynamic; the tab becomes visible only when the General configuration is set to Obtain An IP Address Automatically (as shown in Figure 15.53), and the alternate is used only if the primary address cannot be found/used, such as when the DHCP server is down. The Alternate IP address configuration is only available with the Control Panel applet.
By default, the alternate configuration is an Automatic Private IP Address (APIPA) in the 169.254.x.x range. Selecting User Configured requires you to enter a static IP address to be used in the IP Address field. The entry must be valid for your network in order for it to be usable.
FIGURE 15.54 Alternate Configuration tab
In this chapter, you learned the various options for the installation and upgrade of Windows. Both the installation and the upgrade process were covered in great detail, so you could see what happens in each step. We also covered the various ways to install Windows, deploy images, and recover Windows when things go wrong.
In addition, you learned some of the command‐line tools that can be used to administer Windows. We covered basic Windows commands to view, create, and navigate files and folders. We then focused on commands that help administer and diagnose the network. We also covered commands that help you manage disks and filesystems. We then explored the most important aspect of the command line: getting help.
We concluded the chapter by covering the various network models that Windows is deployed in. You learned about various authentication methods, how to access resources, and how to configure the built‐in firewall to allow applications to be accessed via the network.
A clean installation moves the previous operating system into a folder named WINDOWS.OLD. Applications then have to be reinstalled, and user data has to be migrated from the old system using tools such as USMT. An upgrade preserves the existing applications and the user data, moving them into the new operating system.
Know the OS command-line tools: cd, dir, md, rmdir, ipconfig, ping, hostname, netstat, nslookup, chkdsk, net user, net use, tracert, format, xcopy, copy, robocopy, gpupdate, gpresult, shutdown, sfc, diskpart, pathping, and winver. The cmd command opens a command prompt, where you can type the rest of the commands. If you're not sure how to use a particular utility, using the /? switch at the end of the command will provide information about how to use it.
The answers to the chapter review questions can be found in Appendix A.
xcopy
copy
chkdsk
robocopy
winver.exe
utility and it reported Windows 10 Version 1703
(OS Build 15063.145). What is the current date of the last
update?
Microsoft.com
.ping
nslookup
pathping
tracert
regedit.exe
msinfo32.exe
msconfig.exe
dxdiag.exe
diskpart
format
chkdsk
sfc
C:\WINDOWS
netstat
ipconfig
pathping
nslookup
You will encounter performance‐based questions on the A+ exam. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exam. To see how your answers compare to the authors’, refer to Appendix B.
You need to join a Windows 11 workstation to an Active Directory domain. What are the steps you need to follow to complete the task?
THE FOLLOWING COMPTIA A+ 220‐1102 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
CompTIA has acknowledged that system administrators and technicians are increasingly dealing with more than just Windows on a daily basis. Therefore, it has included objectives based on macOS and Linux.
This chapter looks at the non‐Windows operating systems from the standpoint of what you need to know to pass the exam. All the topics relevant to objectives 1.10 and 1.11 of the 220‐1102 exam are covered.
In the beginning there was UNIX. UNIX System V (version 5) is an operating system originally created and licensed by AT&T Bell Labs. The UNIX operating system is considered to be the root of all UNIX‐based operating systems. In the mid‐1970s, the University of California, Berkeley (UC Berkeley) licensed UNIX from AT&T for its computer systems and expanded on the tools shipped in the original version of UNIX. These tools became the foundation of UNIX as it is today, but UC Berkeley licensed the operating system only for specific machines. Frustrated by this restriction, students developed and released a version of UNIX called the Berkeley Software Distribution (BSD). The term distribution is used today with UNIX/Linux operating systems to define the operating system and its ecosystem for application management, patching, and upgrades.
Although it began with UNIX, BSD became very popular because it was released under an open source license. This allowed everyone to use the operating system on any computer system they wished. In the mid‐1980s, Steve Jobs created a company called NeXT and built computers that shipped with the NeXTSTEP operating system, which was originally built from BSD version 4.3. Unfortunately, the NeXT computer company never really took off. However, the NeXTSTEP operating system was acquired by Apple and eventually became the macOS we know today.
Linux has a very different origin story from macOS. Actually, Linux has nothing to do with the original codebase of UNIX. In the early 1990s a Finnish student named Linus Torvalds set out to create a completely open source operating system for the world to use. Linux was the result of his efforts; it was designed from scratch, so it was completely free for anyone to use or incorporate into their own products. Today you can find a great number of Linux distributions, such as Ubuntu, Debian, Arch Linux, Gentoo, Red Hat…and the list goes on.
Although Linux has a completely different codebase from BSD and UNIX, the operating system itself functions similarly. Only the kernel and inner workings of the OS are different. Many of the applications that were created by students on the BSD platform were ported over to Linux. Functionally, the operating systems are very similar in design and usability.
The applications that are available or installed on an operating system indirectly define an OS by extending functionality to the end user. As is the case with UNIX, BSD, Linux, and macOS, all of the basic command‐line applications are similar in functionality, as you will learn in this chapter. These command‐line applications are preinstalled with the OS distribution. When you need functionality that isn't part of the base OS, you have a few different ways to install applications.
macOS contains an application ecosystem called the App Store, shown in Figure 16.1. From the App Store you can download and buy applications for macOS. Some applications are free to download from the App Store but contain in‐app purchases; these are typically called freemium games and applications.
Applications installed through the App Store do not function any differently from legacy downloaded applications. Unfortunately, not all applications are available through the App Store. The developer of an application must publish it on the App Store, and not every vendor will submit their application. This is mainly due to the costs related to publishing an app to the App Store.
The benefit of using the App Store to install applications for macOS is that if you purchase an app and have two devices, you need to purchase the application only once. This assumes you are logged in with the same Apple ID that purchased the application. Also, updates for App Store applications are installed automatically, whereas applications downloaded directly from the developer must typically be updated manually. The Updates section in the App Store will show you all the applications containing updates.
When you first turn on your Apple device and run through the installation, it will ask for an Apple ID. You really can't move on with the setup without an Apple ID. The Apple ID is your digital identity on the device, and it's what also ties your Apple Wallet to the App Store for purchases. The Apple ID will contain credit card information that can be used for purchases.
FIGURE 16.1 macOS App Store
An organization may use mobile device management (MDM) software to control the installation of applications from the App Store or any software installations in general. An organization might also use the App Store as a distribution point for applications that its employees are expected to use. The organization's MDM software is associated with the employee's Apple ID, which is usually the employee's email address. These features were introduced in macOS starting with version 10.9. New features are introduced in every version of macOS.
Installing downloadable applications on macOS is not much different from the same process in Windows. The process consists of three steps: providing the installation files to the operating system, mounting the installation, and then installing the application.

The first step requires you to provide the application's installation files to the operating system. Nowadays this step is normally done as a download, since we always want the latest version of an application. The process is pretty routine, as you can imagine; you navigate to a web page and download the app. However, providing the application might also be as simple as plugging in a USB drive or loading a DVD. macOS will automatically mount the USB drive or DVD onto the desktop and open the root folder.
The second and third steps in the installation process are to launch the file you downloaded and install the application. When you launch the file, what happens next depends on the type of file. A number of different file types can be downloaded from the Internet, such as ZIP, ISO, DMG, PKG, or APP, just to name a few. The most common for macOS applications are DMG (a disk image that mounts as a virtual drive), PKG (an installer package), and APP (an application bundle that can simply be dragged into the Applications folder).
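As a sketch of how these files can be handled from Terminal, the built‐in hdiutil and installer commands mount a disk image and run an installer package; the file and volume names below are hypothetical examples only:

hdiutil attach ~/Downloads/ExampleApp.dmg                            # mount the disk image on the desktop
sudo installer -pkg /Volumes/ExampleApp/ExampleApp.pkg -target /     # run the installer package it contains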
In Exercise 16.1, you will install an application on macOS, to understand the process.
The Applications folder contains all the applications installed on macOS, as shown in Figure 16.2. You can view and manage applications inside this folder. The two most common methods of opening the folder are selecting Go ➢ Applications from the Finder menu and pressing Shift+Command+A. The key sequence opens the Applications folder directly, with no need to switch to the Finder app first.
FIGURE 16.2 macOS Applications folder
From the Applications folder, you can delete user‐installed applications. However, preinstalled applications in this folder cannot be deleted by the user, since they are technically part of the operating system. These applications are similar to the built‐in applications on Windows.
To delete a user‐installed application, select the application; then on the Finder menu, select File ➢ Move To Trash. This will place the application into the Trash and remove it from the operating system. Another method is to select the application and press Command+Delete. In Exercise 16.2 you will uninstall an application, to understand the process better.
The last and most important topic we will cover about applications is the availability of the application. Creating shortcuts is an essential part of your overall workflow. You should be able to launch an application in as few clicks as possible. To create a shortcut, launch the Applications folder by pressing Shift+Command+A and drag the application to the Dock.
Regardless of the operating system, there are a number of best practices that an administrator should always follow. Depending on the operating system (and distribution, version, edition, and so on), it may be possible to perform operations with the utilities provided, or third‐party utilities may be needed.
Backups are duplicate copies of critical information, ideally stored in a location other than the one where the information is currently stored. Backups include both paper and computer records. Computer records are usually backed up using a backup program, backup systems, and backup procedures.
The primary starting point for disaster recovery involves keeping current backup copies of important data files, databases, applications, and paper records available for use. Your organization must develop a solid set of procedures to manage this process and to ensure that all critical information is protected.
Computer files and applications should be backed up on a regular basis. Here are some examples of critical files that should be backed up:
This list isn't all‐inclusive, but it provides a place for you to start.
In most environments, the volume of information that needs to be stored is growing at a tremendous pace. Simply tracking this massive growth can create significant problems.
You might need to restore information from backup copies for any number of reasons. Some of the more common reasons are as follows:
The information that you back up must be immediately available for use when needed. If a user loses a critical file, they won't want to wait several days while data files are sent from a remote storage facility. Several types of storage mechanisms are available for data storage:
Working Copies Working copy backups, sometimes referred to as shadow copies, are partial or full backups that are kept at the computer center for immediate recovery purposes. Working copies are frequently the most recent backups that have been made.
Typically, working copies are intended for immediate use. They are usually updated on a frequent basis.
Many filesystems used on servers include journaling. A journaled file system (JFS) includes a log file of all changes and transactions that have occurred within a set period of time (such as the last few hours). If a crash occurs, the operating system can check the log files to see which transactions have been committed and which transactions have not.
This technology works well, allowing unsaved data to be written after the recovery, and the system is usually successfully restored to its pre‐crash condition.
On‐Site Storage On‐site storage usually refers to a location on the site of the computer center that is used to store information locally. On‐site storage containers are available that allow computer cartridges, tapes, and other backup media to be stored in a reasonably protected environment in the building.
On‐site storage containers are designed and rated for fire, moisture, and pressure resistance. These containers aren't fireproof in most situations, but they are fire‐rated. A fireproof container should be guaranteed to withstand damage regardless of the type of fire or temperature, whereas fire ratings specify that a container can protect the contents for a specific amount of time in a given situation.
If you choose to depend entirely on on‐site storage, make sure that the containers you acquire can withstand the worst‐case environmental catastrophes that could happen at your location. Make sure, as well, that they are in locations where you can easily find them after the disaster and access them (near exterior walls, on the ground floor, and so forth).
Your determination of which storage mechanism to use should be based on the needs of your organization, the availability of storage facilities, and your budget. Most off‐site storage facilities charge based on the amount of space required and the frequency of access needed to the stored information.
When files are written to a hard drive, they're not always written contiguously or with all the data located in a single location. When discussing Windows, we talked about Disk Defragmenter, which has existed in almost all versions of Windows, and its ability to take file data that has become spread out over the disk and put it all in the same location, a process known as defragmenting. This process decreases the time it takes to retrieve files.
As opposed to FAT‐ and NTFS‐based filesystems, the filesystems used on macOS and Linux rarely, if ever, need to be defragmented. The ext3 and ext4 filesystems are common to Linux, and Apple File System (APFS) is common to macOS. They all have on‐the‐fly defragmentation methods and implement file allocation strategies differently from their traditional Windows counterparts.
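If you do want to inspect fragmentation on an ext4 volume, the e4defrag tool (part of the e2fsprogs package on many distributions) can report a fragmentation score without changing anything; this is a hedged example, and the path is arbitrary:

sudo e4defrag -c /home    # report the fragmentation score for /home without defragmenting it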
It is important to keep the operating system current and updated. Like Windows, many other operating systems include the ability to update automatically, and almost all can look for updates and tell you when they are available. In the Apple world, the App Store represents a location where you can also find updates.
For example, Figure 16.3 shows that a new version of iOS is available on an iPhone. By clicking on Learn More, you display the reasons the new version has been released, allowing you to read about the changes and decide whether you want to upgrade. The update notification usually includes a Learn More option and a link to Apple's release page for the iOS update.
FIGURE 16.3 Apple iOS software update
To access the Software Update area on an iPhone or iPad, choose Settings ➢ General ➢ Software Update. However, when an update is available, you will generally see a prompt on your home screen. To access the Software Update section on macOS, choose System Preferences from the Apple menu, and then click Software Update. In most cases, unless a production device would be negatively impacted, you should keep systems updated with the latest releases.
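On macOS, the same check can also be performed from Terminal with the built‐in softwareupdate tool; a minimal sketch:

softwareupdate --list                   # list the updates that are available
sudo softwareupdate --install --all     # install everything that is available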
As a general rule, updates fix a lot of issues and patches fix a few; multiple patches are rolled into updates. You can't always afford to wait for updates to be released and should install patches—particularly security‐related patches—when they are released. Bear in mind that if all the security patches are not installed during the OS installation, attackers can exploit the weaknesses and gain access to information.
A number of tools are available to help with patch management,
although the intentions of some are better than others. For example,
rather than probe a service remotely and attempt to find a
vulnerability, the Nessus vulnerability scanner (https://www.tenable.com/downloads/nessus
)
will query the local host to see if a patch for a given vulnerability
has been applied. This type of query is far more accurate (and safer)
than running a remote check. Since remote
checks actually send the exploit in order to check to see if it is
applicable, this can sometimes crash a service or process.
Depending on the variant of Linux you are running, APT (Advanced
Package Tool) can be useful in getting the patches from a
repository site and downloading them for installation on Debian and
Ubuntu (just to name a few). The most common command used with this tool
is apt‐get
, which, as the name implies, gets the package
for installation, as shown in Figure 16.4. The Yellowdog
Updater, Modified (YUM) tool is used with Red Hat Package Manager
(RPM)‐based Linux distributions, such as CentOS, Fedora,
and Red Hat, and works in a similar way to APT.
FIGURE 16.4 Ubuntu apt‐get tool
With any operating system, it is essential to keep the drivers and firmware updated. Always remember to back up your configurations (such as for routers) before making any significant changes—in particular, a firmware upgrade—in order to provide a fallback in case something goes awry.
Many network devices contain firmware with which you interact during configuration. For security purposes, you must authenticate in order to make configuration changes and do so initially by using the default account(s). Make sure that the default password is changed after the installation on any network device; otherwise, you are leaving that device open for anyone recognizing the hardware to access it using the known factory password.
Updating firmware for macOS is performed via software updates. During the software update process, the firmware and the software that corresponds to it are updated. Updating firmware for the hardware installed on Linux computers will vary significantly, depending on the type of hardware. Many enterprise Linux vendors, such as Red Hat, include firmware updates in their software updates. As a rule, however, firmware is not part of the Linux software update process.
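On Linux distributions that ship the fwupd service, device firmware can also be checked and updated from the command line; availability depends on the vendor and hardware, so treat the following as a sketch:

fwupdmgr refresh         # download the current firmware metadata
fwupdmgr get-updates     # list devices with pending firmware updates
sudo fwupdmgr update     # apply the available firmware updates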
At one point in time, there were so few viruses outside the Windows world that users not running Windows felt safe without protection on their systems. A significant reason for the low amount of non‐Windows malware was that the authors of such devious programs were focusing on Windows simply because it had the lion's share of the market; they wanted to inflict as much harm as possible with their code.
As other operating systems have increased in popularity, so too have the number of malware items written for them or that can affect them. Because of this, today it is imperative to have protection on every machine. Additionally, this protection—in the form of definition files—must be kept current and up‐to‐date. Antivirus and antimalware definitions are released by the hour as new viruses and malware are identified. Most operating systems check daily for updates to definitions. Chapter 17, “Security Concepts,” discusses security and antivirus/antimalware in more detail.
There are a number of tools to be aware of in macOS and Linux. Most of these have counterparts in the Windows world, and we'll make comparisons where they apply. Tools are released on a daily basis, but the following are the most important for the CompTIA objectives and daily maintenance:
Image Recovery As a general rule, images are typically larger than snapshots. You can take a snapshot of a project, and that will include all the files associated with the project, whereas an image would include the project files and all files on the system at the time. Again, this is only a general rule, since images can be granular as well. Typically, however, snapshots are thought of as subsets of images.
The macOS Disk Utility can be used to create an image of the
macOS operating system, and the image can be directed to an external
storage device. Linux can use a multitude of open source tools to create
an image of the operating system. The most common is the dd
command.
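As a hedged example of the dd approach, the following copies an entire disk to an image file on external storage; the device name and destination path are hypothetical, and pointing dd at the wrong device can destroy data:

sudo dd if=/dev/sda of=/mnt/backup/sda.img bs=4M status=progress    # status=progress requires a recent GNU dd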
Command‐line disk maintenance tools include du, which shows how much disk space is in use; df, which shows how much space is free; and fsck, which checks and repairs disks. Although these command‐line tools are available on both macOS and Linux, Disk Utility is available only on macOS. The Linux operating system can use a variety of open source disk utilities.
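A quick health check might combine these tools as follows; the directory and device names are examples only, and fsck should be run only on an unmounted filesystem:

du -sh /var/log        # summarize the space used by a directory
df -h                  # show free space on all mounted filesystems
sudo fsck /dev/sdb1    # check and repair a filesystem that is not mounted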
Shells such as csh (the C shell) and ksh (the Korn shell), among a number of others, are also in use. In the macOS/iOS world, OpenSSH (Open Secure Shell) is often downloaded and installed. The macOS Terminal utility is accessible by going to Finder ➢ Devices ➢ Applications ➢ Utilities ➢ Terminal.

Like any other modern‐day operating system, macOS is highly customizable. macOS has a feature called System Preferences that allows you to customize the operating system, similar to the Windows 10/11 Settings app and Control Panel applets. In this section, we will cover the most important System Preferences. After this section, you'll understand how to configure macOS and personalize it for your needs.
All preferences are accessed through the System Preferences screen, shown in Figure 16.5. The System Preferences screen can be accessed in a number of ways. The easiest way to launch it is from the Dock, but you can also launch it by clicking the Apple icon in the upper left of the screen and selecting System Preferences.
FIGURE 16.5 macOS System Preferences
The number of icons on the System Preferences screen depends on the applications installed on the operating system and if they are configurable. Let's explore the various System Preferences that you need to know for the CompTIA exam:
FIGURE 16.6 Displays preference
Network The Network preference contains settings pertaining to the network connectivity for the device, as shown in Figure 16.7. In the figure, Wi‐Fi (wireless) is the main connectivity method. If the device used a wired connection, the connectivity method would be Ethernet. You can open the Network preference by clicking the Apple icon in the upper left of the desktop and selecting System Preferences and then Network.
FIGURE 16.7 Network preference
The Network preference allows you to create location‐based preferences. You can make a number of changes to the network settings and assign the changes to a location. For example, you could have a location of work and home. At work you might turn off Bluetooth and at home Bluetooth might be on.
The Network preference is also where you can join wireless networks and change how you join wireless networks. By default, the device will automatically join the network selected. However, there are circumstances where you would not want to automatically join the network, such as if you were directly connecting to a wireless device like a camera that broadcasts its own Service Set Identifier (SSID). If the primary wireless network was still set to automatically join, the device would keep disconnecting from the camera to join the primary network.
If you click Advanced, you can change your primary wireless networks and specify whether they are auto‐joined, as shown in Figure 16.8. In addition to changing advanced properties for the wireless connection, you can select the TCP/IP tab and statically set the IP address. The DNS tab allows you to change the DNS servers to be queried. The WINS tab is for a deprecated Windows service, Windows Internet Name Service, that permits network browsing via broadcasts. The 802.1X tab allows you to set up 802.1X profiles for network‐level security. The Proxies tab allows you to configure proxy servers for traffic on the device, as well as bypass local and select addresses. Finally, the Hardware tab allows you to set special characteristics based on the device.
FIGURE 16.8 Advanced Network preferences
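The static TCP/IP settings described above can also be applied from Terminal with macOS's networksetup command; the following is a sketch, and the service name and addresses are examples only:

sudo networksetup -setmanual "Wi-Fi" 192.168.1.50 255.255.255.0 192.168.1.1    # static IP, subnet mask, and router
sudo networksetup -setdnsservers "Wi-Fi" 192.168.1.1                           # DNS server(s) to query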
Each device will have various methods for network connectivity, and the Network preferences will differ depending on the connection. The Advanced settings may also differ depending on the type of connectivity method for the device.
FIGURE 16.9 Printers & Scanners preference
Printers can be added and removed with this System Preference by clicking the + and – on the print preference screen. You can then select the device and open the print queue by clicking Open Print Queue. This will allow you to see all the print jobs currently waiting to be printed from the local device. You can also click Options & Supplies and view the various options for the printer and check the ink or toner supply levels. In addition, you can share the printer by selecting Share This Printer On The Network. You can then configure the Sharing preferences and choose who can print to the printer.
Many printers today can also be purchased as multifunction copiers (MFCs), which means the printer doesn't just print—it can copy and scan as well. If the device is capable of scanning, a Scan tab will be available after you select the device. Depending on the MFC device attached, the Scan tab will allow you to configure various settings. Although MFC devices are becoming common, if a stand‐alone scanner was connected to the device, it would show in the Printers & Scanners preferences. The device would have only one tab for scanning.
In addition to changing settings for the printers and scanners attached, you can change the default printer and the default paper size. The default printer is set to the last printer used.
Security & Privacy The Security & Privacy preference contains a number of settings that apply to the security of the device and the privacy of the user. The General tab allows you to change the password for the current user, as well as configure how long after the screen saver begins the user is asked to enter their password before logging back in, as shown in Figure 16.10. Many advanced settings in Security & Privacy require the system to be unlocked by clicking the lock in the lower left and entering the administrator password.
FIGURE 16.10 Security & Privacy
A lock screen message can also be set that displays when the screen is locked. Automatic login is disabled by default, but you can change the setting to allow automatic login of the workstation on boot‐up. The applications downloaded on the device can also be controlled; you can select whether apps can be downloaded only from the App Store or from App Store And Identified Developers, which is the default.
The FileVault tab allows you to configure disk‐level encryption to protect your files in the event the device is lost. You can turn on the FileVault feature by clicking Turn On FileVault. Once you do, you will need to unlock the disk with the user's password. If there are multiple users configured on the device, they will need to verify their passwords before the encryption is performed.
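FileVault status can also be confirmed from Terminal with the fdesetup utility; a minimal sketch:

fdesetup status    # reports whether FileVault is On or Off for the boot volume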
The Firewall tab allows you to turn on and configure the built‐in firewall for macOS. It is not turned on by default, but you can turn it on by clicking Turn On Firewall and then configure the firewall options. You can choose to block all inbound connections by default and create exceptions for only the applications you choose. By default, after turning on the firewall, the operating system allows all inbound connections to applications running on the system. You must choose to block all inbound connections and configure the exceptions.
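The application firewall can likewise be inspected and enabled from Terminal through the socketfilterfw utility; the following is a sketch, not a replacement for the Security & Privacy pane:

/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate            # show whether the firewall is enabled
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on    # turn the firewall on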
The Privacy tab allows you to configure settings related to privacy for the user's account, as shown in Figure 16.11. You can control location services that relay information about your location to services like Siri. You can also control the applications that request access to your data, such as Contacts, Calendars, Reminders, and Photos, just to name a few. You can add or remove the applications that need or have access to your data.
Accessibility The Accessibility preference allows you to customize macOS to support requirements such as vision, hearing, or motor skills, as shown in Figure 16.12. You can turn on VoiceOver, which will provide spoken commands as well as descriptions of items in Braille. Turn on Zoom to allow zooming with the use of the keypad. The Display settings allow you to turn on display features for high contrast as well as reduce motion. The Spoken Content section allows you to configure the operating system to speak announcements, sections, items, and typing feedback. Use the Descriptions section to turn on additional spoken content.
The Audio section allows you to visually indicate when audio alerts are being played so that the user does not miss an alert. The Captions section will display captions (subtitles) for the operating system.
Use the Voice Control section to enable the ability to speak to the computer to display and edit text. The Keyboard section lets you enable sticky keys and slow keys. The Pointer Control section lets you control how the trackpad operates by changing click speed. In the Switch Control section, you can specify that an adaptive device (such as a joystick) be used to control your Mac, enter text, and interact with items on your screen.
The Siri section allows you to configure Siri to accept typed requests in lieu of spoken requests. In the Shortcut section, you can specify that a shortcut list appear when you press Option+Command+F5.
FIGURE 16.11 Privacy tab preferences
FIGURE 16.12 Accessibility System Preferences
FIGURE 16.13 Time Machine preferences
There are a number of macOS features CompTIA wants you to know for the exam. You don't have to know the intricacies of each, but you should know their purpose. Be familiar with these features:
FIGURE 16.14 Viewing multiple apps with Mission Control
FIGURE 16.15 Apple Keychain Access utility
FIGURE 16.16 Apple Spotlight utility
Gestures With Apple products, it is possible to scroll, tap, pinch, and swipe to interact with the macOS or other products in a way that is intended to be natural and intuitive. You can accept the default actions for these gestures, or you can configure them differently, as shown in Figure 16.18.
To see the basics of gestures on macOS, visit https://support.apple.com/en-us/HT204895.
FIGURE 16.17 iCloud configuration settings on macOS
FIGURE 16.18 Settings for default gestures on macOS
FIGURE 16.19 Apple Finder utility
FIGURE 16.20 Apple Remote Disc
For more information on Remote Disc, visit https://support.apple.com/en-us/HT201730.
FIGURE 16.21 Apple macOS Dock
There are many other features of macOS that make it a powerful operating system to use. The aforementioned features, however, are the ones that CompTIA wants you to be aware of for the exam. Make sure that you know the purpose of each as you prepare.
The best way to approach the following commands is to think about Microsoft Windows. That operating system offers a plethora of utilities for configuring the workstation, and just in case they don't work, or you want to go about it the hard way, you can use command‐line utilities to accomplish similar tasks. The odds are good that you spend most of your time walking through graphical dialog boxes but you're familiar enough with the command‐line utilities that you can use them when you need to.
Linux is the same way. There is an overabundance of graphical utilities that can be used to configure the system, and they differ based on the distribution and the graphical interface being used. In addition to these, command‐line utilities are available in every distribution that can be used to get the job done. Those command‐line utilities are what we'll focus on here.
There is only one vendor for Windows (Microsoft), but there are many vendors for Linux (Red Hat, SuSE, Ubuntu—to name just three). Also, a new version of Windows is released only every few years (Windows 7, Windows 8/8.1, Windows 10, Windows 11), but with Linux—especially because there are so many vendors—there are a lot of versions. With Ubuntu, for example, the goal is to release a new version every six months.
With all the different distributions and versions, getting to the place where you can run command‐line utilities can differ a bit. In almost every implementation of Linux, you can boot into a command‐line mode, and the commands entered there can then be run. Better than that, though, the easiest way to get to the command line is to open a terminal (also called a console) window. This allows you to interact with the shell, where you can type commands to your heart's content. The default shell in many Linux distributions is Bash. When you open a terminal window or log in at a text console, the Bash shell is what prompts you for commands. When you type a command, the shell executes your command.
Because a shell interprets what you type, knowing how the shell processes the text you enter is important. All shell commands have the following general format (some commands have no options):
command [option1] [option2] … [optionN]
On a command line, you enter a command followed by zero or more
options (also called arguments). The shell uses a
blank space or a tab to distinguish between the command and options.
This means that you must use a space or a tab to separate the command
from the options and the options from one another. If an option contains
spaces, you put that option inside quotation marks. For example, to
search for a name in the password file, enter the following
grep
command (grep
is used for searching for
text in files):
grep "Jon B" /etc/passwd
When grep
prints the line with the name, it looks like
this:
filea:x:1000:100:Jon B:/home/testuser:/bin/bash
If you create a user account with your username, type the
grep
command with your username as an argument to look for
that username in the /etc/passwd
file. In the output from
the grep
command, you can see the name of the shell
(/bin/bash
) following the last colon (:). Because the Bash
shell is an executable file, it resides in the /bin
directory; you must provide the full path to it.
The number of command‐line options and their formats depend on the
actual command. Typically, these options look like ‐X
,
where X
is a single character. For example, you can use the
‐l
option with the ls
command. The command
lists the contents of a directory, and the option provides additional
details. Here is a result of typing ls ‐l
in a user's home
directory:
total 0
drwxr-xr-x 2 testuser users 48 2018-09-08 21:11 bin
drwx------ 2 testuser users 320 2018-09-08 21:16 Desktop
drwx------ 2 testuser users 80 2018-09-08 21:11 Documents
drwxr-xr-x 2 testuser users 80 2018-09-08 21:11 public_html
drwxr-xr-x 2 testuser users 464 2018-09-17 18:21 sdump
If a command is too long to fit on a single line, you can press the backslash key (\) followed by Enter. Then continue typing the command on the next line. For example, type the following command (press Enter after each line):
cat \
/etc/passwd
The cat
command then displays the contents of the
/etc/passwd
file.
You can concatenate (that is, string together) several shorter
commands on a single line by separating the commands with semicolons
(;
). For example, the following command changes the
current directory to your home directory, lists the contents of
that directory, and then shows the name of that directory:
cd; ls -l; pwd
You can combine simple shell commands to create a more sophisticated
command. For example, suppose you want to find out whether a device file
named sdb
resides in your system's /dev
directory because some documentation says that you need that device file
for your second hard drive. You can use the ls /dev
command
to get a directory listing of the /dev
directory, and then
browse through it to see whether that listing contains
sdb
.
Unfortunately, the /dev
directory has a great many
entries, so you may find it hard to find any item that has
sdb
in its name. You can, however, combine the
ls
command with grep
and come up with a
command line that does exactly what you want. Here's that command
line:
ls /dev | grep sdb
The shell sends the output of the ls
command (the
directory listing) to the grep
command, which searches for
the string sdb
. The vertical bar (|
) is known
as a pipe because it acts as a conduit (think of a water pipe)
between the two programs—the output of the first command is fed into the
input of the second command.
Literally hundreds, if not thousands, of Linux commands exist within the shell and the system directories. Fortunately, CompTIA asks that you know a much smaller number than that. Table 16.1 lists common Linux commands by category.
Command name | Action |
---|---|
Managing files and directories | |
cd | Changes the current directory. |
chmod | Changes file permissions. |
chown | Changes the file owner and group. |
cp | Copies files. |
ls | Displays the contents of a directory. |
mkdir | Creates a directory. |
mv | Renames a file and moves the file from one directory to another. |
rm | Deletes files. |
pwd | Displays the current directory. |
Processing files | |
cat | Displays the contents of a file. |
df | Displays the total disk free (disk free space) for a directory. |
dd | Copies blocks of data from one file to another (used to copy data from devices). |
find | Searches for files in a directory hierarchy. |
grep | Searches for regular expressions in a text file. |
nano | Text‐based editor for files. |
Managing the system | |
apt‐get | Downloads files from a repository site. |
yum | Downloads files from a repository site. |
shutdown | Shuts down Linux. |
vi | Starts the visual file editor, which can be used to edit files. |
man | System help for executable files. |
Managing users | |
passwd | Changes the password. |
su | Starts a new shell as another user. (The other user is assumed to be root when the command is invoked without any argument.) |
sudo | Runs a command as another user (usually the root user). |
Networking | |
dig | DNS query utility. |
ip | Allows you to display and configure information related to a network interface card (NIC). |
ifconfig | Allows you to display and configure information related to a network interface card (NIC). |
iwconfig | Similar to ifconfig, but used for wireless configurations. |
Quitting | |
q | While not a utility, the q command is often used to quit most interactive utilities. It is used, for example, to quit working in the vi editor. |
Managing processes | |
ps | Displays a list of currently running processes. |
kill | Terminates a process. |
top | Displays running processes, similar to Windows Task Manager. |
TABLE 16.1 Essential Linux commands
When you want to do anything that requires a high privilege level (for example, administering your system), you have to become root. Normally, you log in as a regular user with your everyday username. When you need the privileges of the super user, though, use the following command to become root:
su ‐
The su
command followed by a space and the minus sign
(or hyphen) provides an environment similar to what the user would
expect by applying the user's environment variables and initial login
scripts. Once executed, the shell then prompts you for the root
password. Type the password and press Enter.
After you've finished with whatever you want to do as root (and you have the privilege to do anything as root), type exit to return to your normal username.
Instead of becoming root by using the su ‐
command, you
can type sudo
followed by
the command that you want to run as root. In some distributions, such as
Ubuntu, you must use the sudo
command because you don't get
to set up a root user when you install the operating system. If you're
listed as an authorized user in the /etc/sudoers
file,
sudo
executes the command as if you were logged in as root.
Type man sudoers
to read
more about the /etc/sudoers
file.
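As a quick illustration, assuming your account is listed in /etc/sudoers (or in a group that is), the following runs a single privileged command and then shows what you are allowed to run:

sudo apt-get update    # run one command with root privileges
sudo -l                # list the commands this account may run via sudo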
The Linux operating system is a secure and functional operating system. However, it's inevitable that at some point you will need to obtain updates for security or to extend the functionality by adding packages. Thankfully, every Linux distribution has its own repository of security patches and additional packages. When choosing a Linux distribution, you should spend time exploring its benefits. For example, Debian packages are very stable, but the disadvantage is that they are often older than the latest available versions. Mint Linux is tailored to laptops and desktops, but it is not normally found on servers. These are just a few examples—the lesson is to do your homework before you decide on a Linux distribution.
Once you have chosen and installed a Linux distribution, you'll want to update the repositories and then upgrade the distribution to get the latest security patches. A repository is a group of packages that are available for download. The repository contains metadata, such as versions, dates, descriptions, and dependencies, about these packages. When you update the repositories, you are downloading the metadata so that you can search and begin the upgrade process.
Depending on the version, you will have one of
two tools: the Advanced Package Tool (APT) or Yellowdog Updater,
Modified (YUM). Linux distributions, such as Ubuntu, Debian, and Mint,
will use the APT package management tool. In the operating system you
can update the repositories by using the apt
command or the
apt‐get
command. In addition to upgrading the operating
system, you can install and manage packages. In order to update the
operating system, you will need to update the repositories first and
then you can upgrade the operating system binaries as shown in the
following:
user@server:~$ sudo apt-get update
[sudo] password for user:
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 http://archive.ubuntu.com/ubuntu bionic-security InRelease
Reading package lists… Done
user@server:~$ sudo apt-get upgrade
Reading package lists… Done
Building dependency tree
Reading state information… Done
Calculating upgrade… Done
The following packages have been kept back:
base-files netplan.io sosreport ubuntu-advantage-tools ubuntu-server
The following packages will be upgraded:
accountsservice apport apt apt-utils bash bcache-tools bind9-host bsdutils
[ output cut]
unattended-upgrades update-manager-core update-notifier-common ureadahead
util-linux uuid-runtime vim vim-common vim-runtime vim-tiny wget xkb-data
241 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
Need to get 111 MB of archives.
After this operation, 43.2 MB of additional disk space will be used.
Do you want to continue? [Y/n]
In the following example, you can see how a package such as
iftop
is installed using the apt‐get
command:
user@server:~$ sudo apt-get install iftop
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following NEW packages will be installed:
iftop
0 upgraded, 1 newly installed, 0 to remove and 246 not upgraded.
Need to get 36.0 kB of archives.
After this operation, 91.1 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 iftop amd64 1.0~pre4-4 [36.0 kB]
Fetched 36.0 kB in 0s (102 kB/s)
Selecting previously unselected package iftop.
(Reading database … 66991 files and directories currently installed.)
Preparing to unpack …/iftop_1.0~pre4-4_amd64.deb …
Unpacking iftop (1.0~pre4-4) …
Setting up iftop (1.0~pre4-4) …
Processing triggers for man-db (2.8.3-2ubuntu0.1) …
user@server:~$
The YUM package manager is used to update and install packages for Red Hat–based Linux distributions, such as Red Hat Enterprise Server, Fedora, and CentOS, just to name a few. The tool works like the APT tool; the first step is to update the repositories, then you can update the binaries, as shown here:
[root@localhost ~]# yum update
CentOS Stream 8 - AppStream 5.6 MB/s | 16 MB 00:02
CentOS Stream 8 - BaseOS 1.6 MB/s | 6 MB 00:02
CentOS Stream 8 - Extras 35 kB/s | 15 kB 00:00
Dependencies resolved.
Nothing to do.
Complete!
[root@localhost ~]# yum upgrade
Last metadata expiration check: 0:01:40 ago on Fri 22 Oct 2021 09:52:10 PM EDT.
Dependencies resolved.
Nothing to do.
Complete!
[root@localhost ~]#
The yum
command can also be used to install packages. In
the following example, we are using yum
to install the
nano
utility:
[root@localhost ~]# yum install nano
Last metadata expiration check: 0:45:41 ago on Fri 22 Oct 2021 10:51:16 PM EDT.
Dependencies resolved.
==============================================================================
Package Architecture Version Repository Size
==============================================================================
Installing:
nano x86_64 2.9.8-1.el8 baseos 581 k
Transaction Summary
==============================================================================
Install 1 Package
Total download size: 581 k
Installed size: 2.2 M
Is this ok [y/N]:
Now that you know how to update the operating system and install
packages, it's inevitable that you'll run out of space on the operating
system. Fortunately, you can monitor and quickly find out how much space
is free on the disk by using the df
command, otherwise
known as the disk free command. By using the df
command you
can quickly see the percentage of free space, and if you supply the
‐h
argument you'll get results in human‐readable formats of
bytes, as shown here:
user@server:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.9G 0 1.9G 0% /dev
tmpfs 393M 1.5M 391M 1% /run
/dev/sda2 20G 6.0G 13G 33% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/loop0 91M 91M 0 100% /snap/core/6350
/dev/loop1 100M 100M 0 100% /snap/core/11993
tmpfs 393M 0 393M 0% /run/user/1000
user@server:~$
Every time the shell executes a command that you type, it starts a
process. The shell itself is a process, as are any scripts or programs
that the shell runs. The ps
command will show a snapshot of
the current processes running as the currently logged‐in user. Use the
ps ax
command to see a list of processes for the entire
operating system. When you type
ps ax
, Bash shows you the
current set of processes. Here are a few lines of output from the
command ps ax ‐cols 132
(the ‐cols 132
option
is used to ensure that you see each command in its entirety):
PID TTY STAT TIME COMMAND
1 ? S 0:01 init [5]
2 ? SN 0:00 [ksoftirqd/0]
3 ? S< 0:00 [events/0]
4 ? S< 0:00 [khelper]
9 ? S< 0:00 [kthread]
19 ? S< 0:00 [kacpid]
75 ? S< 0:00 [kblockd/0]
115 ? S 0:00 [pdflush]
116 ? S 0:01 [pdflush]
118 ? S< 0:00 [aio/0]
117 ? S 0:00 [kswapd0]
711 ? S 0:00 [kseriod]
1075 ? S< 0:00 [reiserfs/0]
2086 ? S 0:00 [kjournald]
2239 ? S<s 0:00 /sbin/udevd -d
[output cut]
6460 ? Ss 0:02 /opt/gnome/bin/gdmgreeter
6671 ? Ss 0:00 sshd: testuser [priv]
6675 ? S 0:00 sshd: testuser@pts/0
6676 p/0 Ss 0:00 -bash
6712 p/0 S 0:00 vsftpd
8002 ? S 0:00 pickup -l -t fifo -u
8034 p/0 R+ 0:00 ps ax --cols 132
In this listing, the first column has the heading PID
,
and it shows a number for each process. PID stands for process ID
(identification), which is a sequential number assigned by the
Linux kernel. If you look through the output of the ps ax
command, you'll see that the init
command is the first
process and has a PID of 1. That's why init
is referred to
as the mother of all processes.
The COMMAND
column shows the command that created each
process, and the TIME
column shows the cumulative CPU time
used by the process. The STAT
column shows the state of a
process: S
means that the process is sleeping, and
R
means that it's running. The symbols following the status
letter have further meanings; for example, <
indicates a
high‐priority process, and +
means that the process is
running in the foreground. The TTY
column shows the
terminal, if any, associated with the process.
The process ID, or process number, is useful when you have to stop an
errant process forcibly. Look at the output of the ps ax
command and note the PID of the offending process. Then use the
kill
command with that process number to stop the process.
For example, to stop process number 8550, type the following
command:
kill 8550
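By default, kill sends the polite termination signal (SIGTERM). If the process ignores it, a stronger signal can be sent; use this sparingly, because the process gets no chance to clean up:

kill 8550       # ask process 8550 to terminate (SIGTERM)
kill -9 8550    # force‐kill it if it refuses to exit (SIGKILL)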
The ps
command will allow you to see processes currently
running at the time you initiate the command. However, if you want to
see an interactive display of processes similar to the Windows Task
Manager, you can use the top
command, shown in Figure
16.22. The top
command allows you to sort by columns
and scroll through the various processes. You can even kill processes
interactively.
FIGURE 16.22 The top command
In Linux, when you log in as root, your home directory is
/root
. For other users, the home directory is usually in
the /home
directory. For example, the home directory for a
user logging in as testuser is /home/testuser
. This
information is stored in the /etc/passwd
file. By default,
only you have permission to save files in your home directory, and only
you can create subdirectories in your home directory to organize your
files further.
Linux supports the concept of a current directory, which is
the directory on which all file and directory commands operate. After
you log in, for example, your current directory is the home directory.
To see the current directory, type the
pwd
command.
To change the current directory, use the
cd
command. To change the current
directory to /usr/lib
, type the following:
cd /usr/lib
Then to change the directory to the cups
subdirectory in
/usr/lib
, type the following command:
cd cups
Now if you use the pwd
command,
that command shows /usr/lib/cups
as the current
directory.
These two examples show that you can refer to a directory's name in
two ways: with an absolute pathname or a relative pathname. An example
of an absolute pathname is /usr/lib
, which is an exact
directory in the directory tree. (Think of the absolute pathname as the
complete mailing address for a package that the postal service will
deliver to your next‐door neighbor.) An example of a relative pathname
is cups
, which represents the cups
subdirectory of the current directory, whatever that may be. (Think of
the relative directory name as giving the postal carrier directions from
your house to the one next door so that the carrier can deliver the
package.)
If you type cd cups
in
/usr/lib
, the current
directory changes to /usr/lib/cups
. However, if you type
the same command in /home/testuser
, the shell tries to
change the current directory to /home/testuser/cups
.
Use the cd command without any arguments to change the current directory back to your home directory. No matter where you are, typing cd at the shell prompt brings you back home. The tilde character (~) is an alias that refers to your home directory. Thus, you can also change the current directory to your home directory by using the command cd ~. You can refer to another user's home directory by appending that user's name to the tilde. Thus, cd ~superman changes the current directory to the home directory of superman.
A single dot (.
) and two dots (..
), often
referred to as dot‐dot, also have special meanings. A single
dot (.
) indicates the current directory, whereas two dots
(..
) indicate the parent directory. For example, if the
current directory is /usr/share
, you go one level up to
/usr
by typing the following:
cd ..
You can get a directory listing by using the ls
command.
By default, the ls
command, without any options, displays
the contents of the current directory in a compact, multicolumn format.
To tell the directories and files apart, use the ‐F
option
(ls –F
). The output will show the directory names with a
slash (/) appended to them. Plain filenames appear as is. The at sign
(@) appended to a listing indicates that this file is a link to another
file. (In other words, this filename simply refers to another file; it's
a shortcut.) An asterisk (*) is appended to executable files. (The shell
can run any executable file.)
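For example, a listing might look like the following; the entries shown are hypothetical:

ls -F
Desktop/  Documents/  backup.sh*  notes.txt  latest@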
You can see even more detailed information about the files and
directories with the ‐l
(long format) option. The rightmost
column shows the name of the directory entry. The date and time before
the name show when the last modifications to that file were made. To the
left of the date and time is the size of the file in bytes. The file's
group and owner appear to the left of the column that shows the file
size. The next number to the left indicates the number of links to the
file. (A link is like a shortcut in Windows.)
Finally, the leftmost column shows the file's
permission settings, which determine who can read, write, or execute the
file. This column shows a sequence of nine characters, which appear as
rwxrwxrwx
when each letter is present. Each letter
indicates a specific permission. A hyphen (‐
) in place of a
letter indicates no permission for a specific operation on the file.
Think of these nine letters as three groups of three letters (rwx): the leftmost group applies to the file's owner, the middle group to the file's group, and the rightmost group to all other users. If the owner's group shows rwx, the file's owner can read (r), write (w), and execute (x) the file. A hyphen in the place of a letter indicates no permission. Thus, the string rw‐ means that the owner has read and write permissions but not execute permission. Although executable programs (including shell programs) typically have execute permission, directories treat execute permission as equivalent to use permission: a user must have execute permission on a directory before they can open and read the contents of the directory.

Thus, a file with the permission setting rwx‐‐‐‐‐‐ is accessible only to the file's owner, whereas the permission setting rwxr‐‐r‐‐ makes the file readable by the world.
Most Linux commands take single‐character options, each with a hyphen
as a prefix. When you want to use several options, type a hyphen and
concatenate (string together) the option letters, one after another.
Thus, ls ‐al
is equivalent to ls ‐a ‐l
as well
as to ls ‐l ‐a
.
You may need to change a file's permission settings to protect it
from others. Use the chmod
command to change the permission
settings of a file or a directory. To use chmod
effectively, you have to specify the permission settings. A good way is
to concatenate letters from the columns of Table
16.2 in the order shown (who/action/permission). You use only the
single character from each column—the text in parentheses is for
explanation only.
Who | Action | Permission |
---|---|---|
u (user) | + (add) | r (read) |
g (group) | ‐ (remove) | w (write) |
o (others) | = (assign) | x (execute) |
a (all) | | s (set user ID) |
TABLE 16.2 Letter codes for file permissions
For example, to give everyone read access to all the files in a
directory, pick a
(for all) from the first column,
+
(for add) from the second column, and
r
(for read) from the third column, to come up
with the permission setting a+r
. Then use the set of
options with chmod
, as follows:
chmod a+r *
On the other hand, to permit everyone to execute one specific file, type the following:
chmod a+x filename
Use ls ‐l
to verify that the change took place.
Sometimes you have to change a file's user or group ownership in
order for everything to work correctly. For example, suppose you're
instructed to create a directory named cups and give it the
ownership of user ID lp and group ID sys. You can log
in as root and create the cups
directory with the command
mkdir
as follows:
mkdir cups
If you check the file's details with the ls ‐l
command,
you see that the user and group ownership are both assigned to root. To
change the owner, use the chown
command. For example, to
change the ownership of the cups
directory to user ID
lp and group ID sys, type the following:
chown lp.sys cups
To copy files from one directory to another, use the cp
command. If you want to copy a file to the current directory but retain
the original name, use a period (.
) as the second argument
of the cp
command. Thus, the following command copies the
Xresources
file from the /etc/X11
directory to
the current directory (denoted by a single period):
cp /etc/X11/Xresources .
The cp
command makes a new copy of a file and leaves the
original intact.
If you want to copy the entire contents of a directory—including all
subdirectories and their contents—to another directory, use the command
cp ‐ar
sourcedir destdir
.
(This command copies everything in the
sourcedir
directory to the
destdir
directory.) For example, to copy
all the files from the /etc/X11
directory to the current
directory, type the following command:
cp ‐ar /etc/X11 .
To move a file to a new location, use the mv
command.
The original copy is gone, and a new copy appears at the destination.
You can use mv
to rename a file. If you want to change the
name of today.list
to old.list
, use the
mv
command as follows:
mv today.list old.list
On the other hand, if you want to move the today.list
file to a subdirectory named saved
, use the following
command:
mv today.list saved
An interesting feature of mv
is that you can use it to
move entire directories (with all their subdirectories and files) to a
new location. If you have a directory named data
that
contains many files and subdirectories, you can move that entire
directory structure to old_data
by using the following
command:
mv data old_data
To delete files, use the rm
command. For example, to
delete a file named old.list
, type the following
command:
rm old.list
Be careful with the rm
command, especially when you log
in as root. You can inadvertently delete important files with
rm
.
Sometimes you just need to view the contents of a file. To view the
contents of a file, use the cat
command. The
cat
command will display the contents of a file as shown in
the following example:
user@server:~$ cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
user@server:~$
The cat command can also be used to concatenate multiple files. For example, you may have two files that you want to join together. By using the > redirector, you can redirect the output of two or more files into one file, as follows:
user@server:~$ cat file1
file1 contents
user@server:~$ cat file2
file2 contents
user@server:~$ cat file2 file1 > file3
user@server:~$ cat file3
file2 contents
file1 contents
user@server:~$
The original text‐based editor that came with Linux/UNIX was the vi editor, which is short for visual editor. The original vi editor was not user‐friendly, and it often required a cheat sheet to get anything done. Thankfully, the nano and pico editors were adopted by Linux, which made editing
text‐based config files much easier. You can launch the
nano
editor by typing the nano
command
followed by the file you want to edit, such as
nano file3
. This will launch the editor,
as shown in Figure 16.23.
FIGURE 16.23 The
nano
editor
Once the editor is launched, it acts much like the Windows Notepad utility. You can use the arrow keys to navigate text; find text; replace text; and cut, copy, and paste text. The editor is also very intuitive; by using the Control key sequences, you can perform all of its functions and even get help.
One of the most important functions you can perform on files is
finding them. In Windows we can use the familiar Ctrl+F key sequence to
search for files in a folder structure. In Linux the find
command allows you to do the same. You can find a file in a folder
structure as shown here:
user@server:~$ find -name file4
./folder1/file4
user@server:~$
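You can also point find at a specific starting directory and search with a wildcard pattern (quote the pattern so the shell doesn't expand it first). For example, assuming a few .list files exist under /home, a search might look like this:
user@server:~$ find /home -name "*.list"
/home/user/old.list
/home/user/saved/today.list
user@server:~$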
In Exercise 16.3, you will work with some basic files on the Linux operating system to get some exposure to working with files.
To organize files in your home directory, you have to create new
directories. Use the mkdir
command to create a directory.
For example, to create a directory named images
in the
current directory, type the following:
mkdir images
After you create the directory, you can use the
cd images
command to change to that directory.
You can create an entire directory tree by using the ‐p
option with the mkdir
command. For example, suppose your
system has a /usr/src
directory and you want to create the
directory tree /usr/src/book/java/examples/applets
. To
create this directory hierarchy, type the following command:
mkdir ‐p /usr/src/book/java/examples/applets
When you no longer need a directory, use the rmdir
command to delete it. You can delete a directory only when the directory
is empty. To remove an empty directory tree, you can use the
‐p
option, as follows:
rmdir ‐p /usr/src/book/java/examples/applets
This command removes the applets directory and then its empty parent directories. The command stops when it encounters a directory that's not empty.
Just as you can use the ipconfig
command to see the
status of IP configuration with Windows, the ifconfig
command can be used in Linux. You can get information about the usage of
the ifconfig
command by using ifconfig ‐help
.
The following output provides an example of the basic
ifconfig
command run on a Linux system:
eth0 Link encap:Ethernet HWaddr 00:60:08:17:63:A0
inet addr:192.168.1.101 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:911 errors:0 dropped:0 overruns:0 frame:0
TX packets:804 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
Interrupt:5 Base address:0xe400
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:3924 Metric:1
RX packets:18 errors:0 dropped:0 overruns:0 frame:0
TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
In addition to using ifconfig
, Linux users can use the
iwconfig
command to view the state of their wireless
network. By using iwconfig
, you can view such important
information as the link quality, access point (AP) MAC address, data
rate, and encryption keys, which can be helpful in ensuring that the
parameters in the network are consistent.
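As a minimal sketch, assuming a wireless interface named wlan0 (interface names and output fields vary by distribution and driver), the command and its output look something like this:
iwconfig wlan0
wlan0     IEEE 802.11  ESSID:"HomeNet"
          Mode:Managed  Frequency:2.437 GHz  Access Point: 00:11:22:33:44:55
          Bit Rate=72.2 Mb/s   Tx-Power=20 dBm
          Link Quality=60/70   Signal level=-50 dBm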
The ifconfig
utility is slowly
being replaced on certain distributions of Linux with the
ip
utility. Red Hat Enterprise Linux has adopted the ip utility, which works much like ifconfig: you can use it to configure an address or to display the configured IP addresses, and its output looks very similar to that of ifconfig:
root@sybex:~# ip addr add 172.16.1.200/12 dev eth0
root@sybex:~# ip addr
eth0: <BROADCAST, MULTICAST, UP> mtu 1500 qlen 1000
link/ether 00:0c:29:e9:08:92 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.200/12 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fee9:892/64 scope link
valid_lft forever preferred_lft forever
lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
root@sybex:~#
The Domain Information Groper (dig
) tool is almost
identical to the nslookup
tool and has become an adopted
standard for name resolution testing on Linux/UNIX operating systems.
The tool allows you to resolve any resource record for a given host and
direct the query to a specific server.
Unlike nslookup, the dig command does not offer an interactive mode. By default, it queries the A record for a given host, and its output includes debugging information.
In the following example, you see a query performed against the DNS server 8.8.8.8 for the MX record of sybex.com. The debugging output shows that one query was sent, two answers were returned, and the response was not authoritative (it did not come from the domain's primary servers). The output also details the query made and the answers returned.
root@Sybex:~# dig @8.8.8.8 mx sybex.com
; <<>> DiG 9.9.5-3ubuntu0.13-Ubuntu <<>> @8.8.8.8 mx sybex.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49694
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;sybex.com. IN MX
;; ANSWER SECTION:
sybex.com. 899 IN MX 10 cluster1.us.messagelabs.com.
sybex.com. 899 IN MX 20 cluster1a.us.messagelabs.com.
;; Query time: 76 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Wed Nov 01 21:43:32 EDT 2017
;; MSG SIZE rcvd: 104
root@Sybex:~#
Linux is a great alternative to the Windows operating system, but the
command line can be overwhelming. However, once you learn some of these basic commands, working at the command line can be very rewarding. From time to time you may need
help, and this is where the man
command comes in handy. The
man
command, also known as the manual command, allows you
to quickly look up the command arguments or usage of a command, as shown
in Figure 16.24. Enter the command
man
, followed by the
command you need information on, to open the manual page. For example,
you can see the manual page for the cat
command by entering
man cat
at the command
line.
Manual pages are installed along with the packages installed on the system. We are concerned only with command‐line usage here, but for programmers there are also manual pages that explain system calls and APIs.
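If you don't remember a command's exact name, you can also search the manual page descriptions by keyword with the ‐k option (equivalent to the apropos command). For example, the following lists every manual page whose short description mentions “copy,” with cp appearing in output similar to this:
man -k copy
cp (1)               - copy files and directories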
The Microsoft Windows operating system has used the Server Message Block (SMB) protocol to connect clients to file servers and print servers since the introduction of Windows. Linux has primarily used Network File System (NFS) as its protocol of choice for sharing files between Linux systems. Both SMB and NFS are considered file‐sharing protocols; they define how one system accesses files on another system over the network.
FIGURE 16.24 The
man
command
Although Microsoft has started to support NFS as an available protocol on Windows for sharing between Windows and Linux, SMB is the protocol of choice if you primarily work from Windows. This is mainly because SMB file sharing is easier to set up and is native to the Windows operating system. Samba is a free open source software (FOSS) package that can be installed on Linux to allow the Linux operating system to share the underlying filesystem via the SMB protocol. By installing the Samba package, you can turn a Linux server into a Windows file server that communicates via SMB. The Samba project is so advanced that you can even use Samba in place of an Active Directory (AD) domain controller (DC).
The Samba installation is simple. If the Linux distribution you are
using supports APT package management, then simply issue the command
sudo apt install samba
.
If the Linux distribution you are using supports YUM package management,
then simply issue the command
sudo yum install samba
.
Unfortunately, installing the service is the easy part of the process.
Samba offers so many features that the configuration can be pretty
complex, depending on what you are trying to achieve.
If you are just setting up a simple file share, then the
configuration is pretty straightforward. You will need to edit the
smb.conf
file in the /etc/samba
directory
structure by using the command
sudo nano /etc/samba/smb.conf
. The
following is a sample configuration of a simple file share.
[fileshare]
comment = Samba on Linux
path = /opt/fileshare
read only = no
browsable = yes
The first line, [fileshare]
, is the file share name.
This can be anything, but for this example it is fileshare
.
The comment that follows is for the administrator to explain the purpose
of the share. The path is the local filesystem path that will be shared
out; in this example, it is /opt/fileshare
. The
read only
line configures read and write capability for the
share. The browsable
line configures whether the share is
populated in the NetBIOS browsing process.
After saving the configuration, the Samba service needs to be restarted to pick up the new configuration added to the smb.conf file. Restarting the service can be achieved by entering the following:
sudo service smbd restart
You may also have to add a firewall rule to allow incoming connections. This can be achieved by entering the following:
sudo ufw allow samba
These examples were from an Ubuntu server, which uses APT package management. However, the process is the same regardless of the distribution or package manager: install Samba, configure Samba, restart the Samba service, and open the firewall.
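For instance, on a distribution that uses YUM and firewalld (such as Red Hat Enterprise Linux or CentOS), the restart and firewall steps would look something like the following; note that the Samba service unit on those systems is typically named smb rather than smbd:
sudo systemctl restart smb
sudo firewall-cmd --permanent --add-service=samba
sudo firewall-cmd --reload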
This chapter provided an overview of operating systems other than Microsoft Windows. In particular, we looked at macOS and Linux, including the features and various tools included with each that appear on the CompTIA A+ 220‐1102 exam.
We covered the installation and uninstallation of applications in macOS. This included the various methods with which an application can be downloaded and installed. We also discussed the basic management of applications, such as creating shortcuts.
We also covered best practices for both macOS and Linux. There are best practices that technicians and administrators should follow regardless of which operating system(s) they are running, such as backup and antivirus.
The chapter concluded with an examination of some basic Linux commands. Many of the commands can be used in a variety of applications. This chapter covered what you need to know for the 220‐1102 exam.
Commands can be used to navigate the filesystem (cd, pwd), to change file permissions and ownership (chmod, chown), to run commands as another user (su, sudo), and to do many other tasks.
The answers to the chapter review questions can be found in Appendix A.
You want a long listing of the files in a directory (which requires the –l option), including any hidden files (which requires the –a option). Which command should you use?
ls –a | ls ‐l
ls –s; ls ‐l
ls ‐la
ls –a\ls ‐l
ps
nano
rm
ls
dd
apt‐get
ip
pwd
update
apt
patch
cd
chmod
chown
pwd
fsck
chkdsk
du
dumgr
kill
sudo
su
passwd
ifconfig
ls
cat
ps
su
/home/testuser/documents/mail
directory. Which command will take you to
/home/testuser/documents
?
cd .
cd ..
cd . . .
cd
‐rwxrw‐r‐‐
, what permissions apply for a user who is a
member of the group to which the owner belongs?
–p
option with mkdir
do?
You will encounter performance‐based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
By default, not all files and folders in a Linux directory are shown
when you do an ls
listing. Entries that start with a period
(.) are considered “hidden” and not shown. Try the following commands, and compare the results you see with and without the ‐a option:
1. cd / to change to the root directory.
2. ls ‐F to see the files and directories in the root directory.
3. ls ‐aF to see everything, including hidden files.
4. cd ~ to change to your home directory.
5. ls ‐l to see the files and directories in your home directory.
6. ls ‐al to see everything, including hidden files.
THE FOLLOWING COMPTIA A+ 220‐1102 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
Think of how much simpler an administrator's life was in the days before every user had to be able to access the Internet, and how much simpler it must have been when you only had to maintain a number of dumb terminals connected to a mini‐tower. Much of what has created headaches for an administrator since then is the inherent security risk that comes about as the network expands. As our world—and our networks—have become more connected, the need to secure data and keep it away from the eyes of those who can do harm has increased exponentially.
Realizing this, CompTIA added the Security domain to the A+ exams a number of years back. Security is now a topic that every administrator and technician must not only be aware of and concerned about, but also be actively involved in implementing methods to enforce and monitor. In the world of production, quality may be job one, but in the IT world, it is security.
This chapter, one of two chapters that focus primarily on security, will cover myriad security concepts. First, we will explore the physical aspects of security, and then we will dive deeper into the logical aspects of security. We will then look at how external forces, such as malware, social engineering, and vulnerabilities, can impact security. We will finish this chapter by looking at some common ways that you can safeguard yourself from security breaches. We will cover the proper destruction and disposal methods as well as security measures you can employ in network installations.
Many of the security issues that plague networks today can be solved through the implementation of basic security elements. Some of those elements are physical (e.g., locked doors), and others are digital (e.g., antivirus software), but all share in common the goal of keeping problems out. The following six topic areas are key:
As you study for the exam, know the types of physical security elements that you can add to an environment to secure it. Know, as well, what types of digital security you should implement to keep malware at bay. Understand that the first line of defense is the user. You need to educate users to understand why security is important, and you must impose the principle of least privilege to prevent them from inadvertently causing harm.
Physical security is the most overlooked element of security in a network. A simple lock can keep out the most curious prying eyes from a network closet or server room. A more layered approach can be implemented for higher security installations. However, the simple fact is that not a lot of time is spent on physically securing the network. In the following sections, we will cover the CompTIA objectives related to physical security of networks.
The use of an access control vestibule, also known as a mantrap, helps to prevent nonauthorized users from tailgating. An access control vestibule is a small room that has two controlled doors, as shown in Figure 17.1. When a person enters the first door, they are trapped in the room until they have been authorized to enter the second controlled door. The close proximity between people in this confined space makes it uncomfortable both for the authorized user and for the nonauthorized user attempting to tailgate. In this example, the doors are controlled by radio frequency identification (RFID) readers, which we will cover later in this section.
FIGURE 17.1 A common access control vestibule setup
Identification (ID) badges are used to provide proof of access. Badges can be any form of identification intended to differentiate the holder from everyone else. This can be as simple as a name badge or a photo ID. When the badge contains a photo ID, it is considered the authentication factor of something that you are.
Many ID badges also contain a magnetic strip or RFID provision so that the badge can be used in conjunction with a badge reader. When the information is read by the badge reader, it is sent to an access control system for authorization through the controlled door. A benefit of implementing badge readers is that it creates an electronic audit of all access to an area.
Video surveillance is the backbone of physical security. It is the only detection method that allows an investigator to identify what happened, when it happened, and, most important, who made it happen. Two types of cameras can be deployed: fixed and pan‐tilt‐zoom (PTZ). Fixed cameras are the best choice when recording for surveillance activities. Pan‐tilt‐zoom (PTZ) cameras allow for 360‐degree operations and zooming in on an area. PTZs are most commonly used for intervention, such as covering an area outside during an accident or medical emergency. PTZ cameras are usually deployed for the wrong reasons, mainly because they are cool! PTZs are often put into patrol mode to cover a larger area than a fixed camera can. However, when an incident occurs, they are never pointed in the area you need them! It is always best to use a fixed camera or multiple fixed cameras, unless you need a PTZ for a really good reason. They are usually more expensive and require more maintenance than fixed cameras.
Video surveillance can be deployed using two common media types: coaxial cable and Ethernet. Coaxial cable is used typically in areas where preexisting coaxial lines are in place or distances are too far for typical Ethernet. These systems are called closed‐circuit television (CCTV). Coaxial camera systems generally use appliance‐like devices for recording of video. These CCTV recorders generally have a finite number of ports for cameras and a finite amount of storage in the form of direct‐attached storage (DAS).
Ethernet (otherwise known as IP) surveillance is becoming the standard for new installations. Anywhere an Ethernet connection can be installed, a camera can be mounted. Power over Ethernet (PoE) allows power to be supplied to the camera, so the additional power supplies used with coaxial cameras are not needed. Ethernet also provides the flexibility of virtual local area networks (VLANs) for added security so that the camera network is isolated from operational traffic. IP surveillance uses network video recorder (NVR) software to record cameras. Because NVRs are server applications, you can use traditional storage such as network‐attached storage (NAS) or storage area network (SAN) storage. This allows you to treat the video recordings like traditional data.
Coaxial camera networks can be converted to IP surveillance networks with the use of a device called a media converter. These devices look similar to a CCTV recorder. They have a limited number of ports for the coaxial cameras and are generally smaller than the CCTV recorder. This is because they do not have any DAS. The sole purpose of the media converter is to convert the coaxial camera to an Ethernet feed to the NVR.
The use of IP video surveillance allows for a number of higher‐end features such as camera‐based motion detection, license plate recognition (LPR), and motion fencing. Advanced NVR software allows cameras to send video only when motion is detected at the camera; this saves on storage for periods of nonactivity. LPR is a method of detecting and capturing license plates in which the software converts the plate to a searchable attribute for the event. With motion fencing, an electronic fence can be drawn on the image so that any activity within this region will trigger an alert. Among the many other features are facial recognition and object recognition.
There are several different motion sensor types that you can use to detect unauthorized access. Passive infrared (PIR) is the most common motion detection used today, mainly because of price. PIR sensors operate by monitoring the measurement of infrared radiation from several zones. In Figure 17.2, you can see the reflective panel that divides the infrared zones. A PIR sensor will always have this grid pattern on the sensor's face.
FIGURE 17.2 A typical PIR sensor
Microwave detectors also look like PIR sensors, but they do not have a reflective panel. Microwave detectors are common in areas where wide coverage is needed. Microwave detectors operate by sending pulses of microwaves out and measuring the microwaves received. These detectors are more expensive than PIR sensors and are susceptible to external interference, but they have a wider area of coverage.
Vibration sensors are another type of sensor used for motion detection. Although you may have seen them in the latest over‐the‐top heist movie, vibration sensors are really used in physical security systems. They are most often implemented as seismic sensors. They help protect from natural disasters and accidental drilling, or the occasional over‐the‐top heist.
An alarm system is another type of physical security system. It provides a method to alert security personnel in the event of unauthorized access or a break in. An alarm system can be configured to trigger in the event of an access control system logging unauthorized access to a controlled door. However, it is more common to find alarm systems installed for break‐in detection and response.
An alarm system can be configured to use motion sensors, video surveillance, magnetic contacts, and a multitude of other sensors. Each sensor will be installed in a different logical zone. The perimeter of the building might be zone 1, the server room might be zone 2, and so on. There could be many different zones configured for each sensor. The main purpose of the zone is to communicate the location of the sensor being tripped so that law enforcement agents can respond to that location.
A monitoring company is typically contracted with the installation. The purpose of the monitoring company is to act as a buffer between the zone being tripped and a law enforcement agency. The alarm panel will dial out to a monitoring station and it will transmit the account number and the zone that is tripped. The monitoring station can then process its call‐down list. The call‐down list will typically consist of the phone number of the supervisor of the area, the in‐house security personnel, and ultimately a law enforcement agency. The monitoring company can also monitor the health of the alarm panel, depending on the model of alarm panel and its features.
The most common physical prevention tactic is the use of locks on doors and equipment. This might mean the installation of a tumbler‐style lock or an elaborate electronic combination lock for the switching closet. If a tumbler‐style lock is installed, then the appropriate authorized individuals who require access will need a physical key. Using physical keys can become a problem, because you may not have the key with you when you need it the most, or you can lose the key. The key can also be copied and used by unauthorized individuals. Combination locks, also called cipher locks, can be reprogrammed and do not require physical keys, as shown in Figure 17.3. Combination locks for doors can be purchased as mechanical or electronic.
FIGURE 17.3 A typical combination door lock
There are many different types of equipment locks that can secure the information and the device that holds the information. Simply thwarting the theft of equipment containing data and restricting the use of USB thumb drives can secure information. In the following sections, we will cover several topics that are directly related to the physical aspects of information security.
Cable locks are used to secure laptops and any device with a Universal Security Slot (USS), as shown in Figure 17.4. A cable lock is just that—a cable with a lock at one end. The lock can be a tumbler or a combination, as shown in Figure 17.5. The basic principle is that the end of the lock fits into the USS. When the cable is locked, the protruding slot of metal turns into a cross that cannot be removed. This provides security to expensive equipment that can be stolen due to its portability or size.
FIGURE 17.4 A Universal Security Slot
FIGURE 17.5 A standard cable lock
Most servers come with a latch‐style lock that prevents someone from opening the server, but the tumbler‐style lock is trivial to open. Anyone with a paperclip can open these locks if they have forgotten the keys. Other types of server locks are holes for padlocks that latch through the top cover and the body of the server. However, over the past 10 years, a declining number of servers come with this feature. This is mainly due to the fact that servers can be better secured behind a locked rack‐mounted enclosure. Rack‐mounted enclosures generally come with a tumbler‐style lock that can protect all the servers and network equipment installed in the cabinet, while still providing airflow.
Universal Serial Bus (USB) locks can be put into place to physically lock out USB ports on a workstation or server from use. These devices are extremely rare to find, because most equipment and operating systems allow for the USB ports to be deactivated. USB locks work by inserting a small plastic spacer into the USB port. Once inserted, the spacer latches to the USB detent with plastic teeth. A tool is required to remove the USB spacer.
Physical security begins with personnel—specifically, security‐focused personnel, such as security guards. Security guards should be responsible for limiting access from the outer perimeter of your installation. Security guards typically use photo IDs, also known as ID badges, to allow access to the installation. Exceptions to this are people on the entry control roster; in some secured buildings, only people on the entry control roster are allowed to enter. In this type of scenario, the ID badge is used only to provide ID. This is common in government and sensitive installations.
Fences are a physical security barrier to keep unauthorized persons out of a secure area. Exterior fences can be arranged so that they create a choke point where a guard can inspect credentials to allow authorized personnel into the area. Guards can also be replaced with electronic locks and RFID readers to limit access. When installed in conjunction with a video camera system to surveil the area around the entry point, a fence creates a very secure outer layer for your facility. Fences should be considered the outermost security layer of a multiple‐barrier system.
In addition to an exterior fence, the building that houses the data should have RFID readers and electronic door locks. The innermost area surrounding the equipment can also be segmented with a fence and additional access controls, such as standard keyed locks or electronic access control. When fences are used in the interior of the data center, air quality can be maintained while preventing unauthorized access, as shown in Figure 17.6. Using multiple barriers as described allows contractors for HVAC systems to maintain their systems, while preventing direct physical access to the servers and equipment.
FIGURE 17.6 Interior data center fences
A bollard is an architectural structure that acts as a visual indicator for a perimeter. They are also very sturdy, since their second function is to act as a barrier for the perimeter and protect the area. They are commonly found around areas where a truck or other vehicle can cause damage. In Figure 17.7, the bollard is protecting a fiber‐optic vault from accidental damage by a vehicle. Bollards can also be found in the interior of a building if there is potential for damage to a protected area from a vehicle, such as a forklift or equipment cart.
Organizations should authorize and audit staff access for sensitive areas inside a facility. Implementing physical security for staff is one way you can control access to the physical equipment and the data that is stored on the equipment. By limiting access to the equipment and the underlying data, you can prevent service disruptions or loss of data. This section will focus on ways that you can implement physical security for staff.
FIGURE 17.7 A typical bollard
Key fobs are named after the chains that used to hold pocket watches to clothes. Key fobs are embedded radio frequency identification (RFID) circuits that fit on a set of keys and are used with physical access control systems, as shown in Figure 17.8. They are often used for access to external and internal doors for buildings. Key fobs are close‐proximity devices that authorize the user for entry; an electronic lock is actuated when the device is presented, and the door can be opened. This is an authentication factor of something that you have.
FIGURE 17.8 A key fob
A smartcard is the size of a credit card with an integrated circuit embedded into the card (also called an integrated circuit chip [ICC]). The chip is exposed on the face of the card with surface contacts, as shown in Figure 17.9. Smartcards are used for physical authentication to electronic systems and access control systems and require a PIN or password. A smartcard is considered a multifactor authentication method because it is something you have (card) and something you know (PIN or password). The U.S. military uses smartcards called Common Access Cards (CACs) for access to computer systems and physical access controls.
FIGURE 17.9 A typical smartcard
An RFID badge is a wireless, no‐contact technology used with RFID transponders. RFID badges typically work on the 125 kHz radio frequency and are passively powered by the RFID transponder. When an RFID badge is placed in close proximity to the RFID transponder, the radio frequency (RF) energy emitted by the transponder powers a chip in the RFID badge. The RFID chip then modulates the signal back to the transponder in order to transmit its electronic signature (number). This type of authentication is considered something you have.
Physical keys are extremely hard to control and do not allow for the auditing of their usage. A physical key can be lent to someone, copied, stolen, or used by an unauthorized person. Because of the problems surrounding physical keys, their use should largely be avoided.
If keys are absolutely necessary, then a two‐person system should be considered. A two‐person system requires that two people must use their keys to open one lock, although nothing stops the keys from being lent to the same person to open a door.
Another option is to use an electronic lock box for management of the keys. When a technician needs a particular key, they will log into the key box and check out the key needed. This system allows for auditing controls, but it does not prevent copying of keys.
Biometric devices use physical characteristics to identify the user. This type of authentication is considered something that you are. Such devices are becoming more common in the business environment. Biometric systems include fingerprint/palm/hand scanners, retinal scanners, and soon, possibly, DNA scanners. Figure 17.10 shows a typical biometric device. In recent years, several mobile phones have implemented biometrics in the access control of the mobile device. Several manufacturers have adopted fingerprint access control, and some have even adopted facial recognition via the forward‐pointing camera.
FIGURE 17.10 A typical biometric lock
To gain access to resources, you must pass a physical screening process. In the case of a hand scanner, this may include identifying fingerprints, scars, and markings on your hand. Retinal scanners compare your eye's retinal pattern to a stored retinal pattern to verify your identity. DNA scanners will examine a unique portion of your DNA structure to verify that you are who you say you are.
With the passing of time, the definition of biometrics is expanding from simply identifying physical attributes about a person to being able to describe patterns in their behavior. Recent advances have been made in the ability to authenticate someone based on the key pattern that they use when entering their password (how long they pause between keys, the amount of time each key is held down, and so forth). A company adopting biometric technologies needs to consider the controversy they may face. Some authentication methods are considered more intrusive than others. The error rate also needs to be considered, along with an acceptance of the fact that errors can include both false positives, where the reader allows access falsely, and false negatives, where the reader denies access erroneously. Therefore, biometrics is often used with another factor of authentication, such as a PIN number. This approach provides multifactor authentication.
Most security cameras work on the principle of collecting light to record a picture. As light levels decrease, the quality of the picture decreases significantly. Therefore, areas in which you have video cameras should have sufficient levels of lighting. In reality any area that is sensitive should have a level of lighting, since threat agents often hide in darker areas to avoid being seen.
Lighting the area doesn't always require a visible light source. Most camera sensors built in the last 10 years allow for the collection of light from infrared (IR) light sources. This means that even in the dark you can surveil and record an area, although a visible light source does deter unauthorized access.
The magnetometer, also known as a metal detector, uses an electromagnetic field to detect metallic objects. We have all seen these devices at a choke point in the airport or government building. When a metal detector is used for people entering a facility, you can detect weapons, such as guns or knives. When a metal detector is deployed in this fashion, it protects your staff from threat agents with malicious intent.
A metal detector can also be used to monitor people leaving a facility. The metal detector can monitor staff leaving with equipment. When it is used in this way, it will protect against data loss and theft. However, unless your organization is regulated as a highly classified facility, this approach will be hard to enforce and could infringe on your employees' privacy.
Whereas the topic of physical security concepts, from CompTIA's standpoint, focuses on keeping individuals out, logical security focuses on keeping harmful data and malware out as well as on authorization and permissions. This logical security includes devices and methods that protect the environment logically, such as firewalls, antivirus software, and directory permissions, just to name a few. The areas of focus are antivirus software, firewalls, antimalware, user authentication/strong passwords, and directory permissions. Each of these topics is addressed in the sections that follow.
The principle of least privilege is a common security concept that states a user should be restricted to the fewest number of privileges that they need to do their job. By leveraging the principle of least privilege, you can limit internal and external threats. For example, if a front‐line worker has administrative access on their computer, they have the ability to circumvent security; this is an example of an internal threat. Along the same lines, if a worker has administrative access on their computer and receives a malicious email, a bad actor could now have administrative access to the computer; this is an example of an external threat. Therefore, only the permissions required to perform their tasks should be granted to users, thus providing least privilege.
Security is not the only benefit to following the principle of least privilege, although it does reduce the surface area of attack because users have less access to sensitive data that can be leaked. When you limit workers to the least privilege they need on their computers or the network, fewer intentional or accidental misconfigurations will happen that can lead to downtime or help desk calls. Some regulatory standards require following the principle of least privilege. By following the principle of least privilege, an organization can improve on compliance audits by regulatory bodies.
Access control lists (ACLs) are used to control traffic and applications on a network. Every network vendor supports a type of ACL method; for the remainder of this section, I will focus on Cisco ACLs.
An ACL consists of multiple access control entries (ACEs), each of which is a condition/action pair. Each entry is used to specify the traffic to be controlled. Every vendor will have a different type of control logic. However, understanding the control logic of the ACL system allows you to apply it to any vendor and effectively configure an ACL. The control logic is defined with these simple questions:
Let's explore the control logic for a typical Cisco layer 3 switch or router. The conditions of the ACL are evaluated from top to bottom. If no condition in the ACL matches the traffic, the default action is to deny it. Only one ACL can be configured per interface, per protocol, and per direction. When you are editing a traditional standard or extended ACL, the entire ACL must be negated and reentered with the new entry. With traditional ACLs, there is no way to edit a specific entry on the fly. When editing a named access list, each condition is given a line number that can be referenced so that the specific entry can be edited. For the remainder of this section, we will use named access lists to illustrate an applied access list for controlling traffic.
In Figure 17.11 you can see a typical corporate network. There are two different types of workers: HR workers and generic workers. We want to protect the HR web server from access by generic workers.
FIGURE 17.11 A typical corporate network
We can protect the HR server by applying an ACL to outgoing traffic for Eth 0/0 and describing the source traffic and destination to be denied. We can also apply an ACL to the incoming interface of Eth 0/2 describing the destination traffic to be denied. For this example, we will build an access list for incoming traffic to Eth 0/2, blocking the destination of the HR server.
Router(config)# ip access-list extended block-hrserver
Router(config-ext-nacl)# deny ip any host 192.168.1.4
Router(config-ext-nacl)# permit ip any any
Router(config-ext-nacl)# exit
Router(config)# interface ethernet 0/2
Router(config-if)# ip access-group block-hrserver in
This ACL, called block‐hrserver, contains two condition action statements. The first denies any source address to the specific destination address of 192.168.1.4. The second allows any source address to any destination address. We then enter the interface of Eth 0/2 and apply the ACL to the inbound direction of the router interface. The rule will protect the HR server from generic worker access while allowing the generic workers to access all other resources and the Internet.
It is important to note that the focus of this section is to understand how ACLs are used to protect resources. It is not important to understand how to build specific ACLs, since commands will be different from vendor system to vendor system.
This section discusses various components of physical security that control access. When components are used to control access, they do so based on the authentication of people. Authentication can happen in several different ways, as follows:
All authentication is based on something that you know, have, are, or do, or a location you are in. A common factor of authentication is a password, but passwords can be guessed, stolen, or cracked. A fingerprint can be lifted with tape, a key can be stolen, or a location spoofed. No one factor is secure by itself because it can be compromised easily.
When more than one item (factor) is used to authenticate a user, this is known as multifactor authentication (MFA). It may take two, three, or four factors to authenticate, but as long as it is more than one, as the name implies, it is known as multifactor. One of the most common examples where this is used in everyday life is at an ATM. In order to withdraw money, a user must provide a card (one factor) and a PIN number (a second factor). If you know the PIN number but do not have the card, you cannot get money from the machine. If you have the card but do not have the PIN number, you cannot get money from the machine.
In this section we will cover the most common two‐factor (2FA)/multifactor authentication methods used by protected applications. The following methods are generally used in conjunction with a traditional username and password combination. It should be assumed that when we talk about 2FA, it provides the same functionality as MFA.
Some applications use email as a 2FA method. However, using email as a 2FA option is probably the least secure method. This is mainly because people reuse passwords. If your banking website username and password are compromised (something you know) and you reuse the same credentials for email, this method provides no protection. Hopefully the email account is protected with 2FA in a way that requires something you have.
Email is useful as a notification method when someone logs into a secured account. However, keep in mind that threat agents know this as well. If your email account is compromised, a threat agent will often create a rule in your mailbox to dump these notifications directly to the trash.
Some applications will allow the use of short message service (SMS) text messages as the 2FA method. When this method is used, a simple text message is sent to the user's phone number. The message will contain a random 5‐ to 8‐digit code that the user will use to satisfy the 2FA requirement. When you first set up this 2FA method, the protected application will request the code before turning on 2FA. This is done to verify that the phone number is correct and that you can receive text messages.
Some applications that are protected by 2FA will allow voice calls to be initiated to the end user. This is usually done if the person does not have a phone that accepts text messages. The voice call will recite a 5‐ to 8‐digit code that the user will use to satisfy the 2FA requirement. This process is similar to SMS, with the difference being it is an automated voice call.
Physical hardware tokens are anything that a user must have on them to access network resources. They are often associated with devices that enable the user to generate a one‐time password (OTP) to authenticate their identity. SecurID from RSA is one of the best‐known examples of a physical hardware token, as shown in Figure 17.12.
FIGURE 17.12 An RSA security key fob
Hardware tokens operate by rotating a code every 60 seconds. This rotating code is combined with a user's PIN or password for authentication. A hardware token is considered multifactor authentication because it is something you have (hardware token) and something you know (PIN or password).
A new type of token is becoming the standard, and it is considered a software token, or soft token. It operates the same as a hardware token, but it is an application on your cell phone that provides the code. Google Authenticator is one example of this type of application. Microsoft also has an authenticator application similar to Google Authenticator.
When configuring 2FA on an application, you have two ways of adding an account to the authenticator application: you can scan a quick response (QR) code, or you can enter a setup key into the authenticator application. If you choose to use a QR code, the application turning 2FA on will present a QR code that can be scanned by the authenticator application. If you choose to use a setup key, the application turning 2FA on will provide a key. There is generally a second step before the application is protected by 2FA, where you will be required to enter a code from the authenticator application into the protected application. A lengthy one‐time‐use backup key is also generated, in case you need to turn 2FA off because your device is lost or stolen.
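As an illustration of how these soft tokens work, the setup key is simply a shared secret that feeds a time‐based one‐time password (TOTP) algorithm. On a Linux system with the oath‐toolkit package installed, you can generate the same rotating six‐digit code from a base32 setup key (the key and the code shown here are made‐up examples):
oathtool --totp -b "JBSWY3DPEHPK3PXP"
492039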
The traditional workforce is slowly becoming a mobile workforce, with employees working from home, on the go, and in the office. Mobile devices such as laptops, tablets, and smartphones are used by employees to connect to the organization's cloud resources. Bring your own device (BYOD) has been embraced as a strategy by organizations to alleviate the capital expense of equipment by allowing employees to use devices they already own.
Because employees are supplying their own devices, a formal document called the BYOD policy should be drafted. The BYOD policy defines a set of minimum requirements for the devices, such as size and type, operating system, connectivity, antivirus solutions, patches, and many other requirements the organization will deem necessary.
Many organizations use mobile device management (MDM) software that dictates the requirements for the BYOD policy. MDM software helps organizations protect their data on devices that are personally owned by the employees. When employees are terminated or a device is lost, the MDM software allows a secure remote wipe of the company's data on the device. The MDM software can also set policies requiring passwords on the device. All of these requirements should be defined in the organization's BYOD policy.
Microsoft originally released Active Directory (AD) with Windows 2000 Server to compete with Novell Directory Services (NDS). Active Directory is a highly scalable directory service that can contain many different objects, including users, computers, and printers, just to name a few. Active Directory uses a protocol called Lightweight Directory Access Protocol (LDAP) to quickly look up objects. It's important to understand that Active Directory is not the authentication mechanism; it is only the directory used for storing and looking up objects. Active Directory works in conjunction with Kerberos, which is the protocol that performs the authentication of users.
Active Directory uses a directory partition called the schema partition to describe classes of objects and the attributes that define each object. Each attribute is defined in the schema based on the value it will contain. For example, a user class can have a first name, last name, middle initial, description, and a number of other attributes. A specific user account is then created and configured in the GUI Microsoft Management Console (MMC) called Active Directory Users and Computers. There are also several attributes that don't show up in the management tool, such as when an object was last changed or replicated, as well as many other attributes used by Active Directory for management.
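Because Active Directory exposes its directory over LDAP, objects can be queried from almost any platform. As a quick sketch, assuming the OpenLDAP client tools are installed and a hypothetical domain controller named dc1.sybex.com, the following would look up a user object by its sAMAccountName attribute:
ldapsearch -x -H ldap://dc1.sybex.com -D "jdoe@sybex.com" -W -b "dc=sybex,dc=com" "(sAMAccountName=jdoe)"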
A domain is a hierarchical collection of security objects, such as
users, computers, and policies, among other components. Active Directory
domains are named with a Domain Name System (DNS) name. For example, sybex.com
would be the root
domain; if you wanted to add a child domain, you would prepend its name to the namespace, as follows: east.sybex.com
. Using a
DNS namespace is one of the ways that Active Directory is scalable and
hierarchical. Many organizations never need anything more than one
domain to contain all their security objects.
When a user authenticates against an Active Directory domain, a domain access token is issued, as shown in Figure 17.13. You can consider these to be keys for the various locks (ACLs) on resources. If the user is a member of a particular security group, it will be in their security token. When the user encounters a file that is secured with an ACL, the security token is presented. If there is a matching credential, then the user is granted the associated file permission on the ACL.
FIGURE 17.13 Active Directory security tokens
A domain can hold security objects, but you need to have some organization to the many different objects that you will create in your domain. Organizational units (OUs) enable you to group objects together so that you can apply a set of policies to the objects. OUs should be designed to group objects by the following criteria:
Group Policy is a feature of Active Directory that enables you to apply policies to control users and computers. Typically, you do not apply policies to individual users or computers but instead to groups of users or computers. A Group Policy Object (GPO) is a type of object in Active Directory that allows you to apply a set of policies against an organizational unit. Group Policy Objects are created, linked, and edited in the Group Policy Management Console (GPMC), as shown in Figure 17.15.
FIGURE 17.14 A hybrid OU structure
FIGURE 17.15 The Group Policy Management Console
You can control thousands of settings for both the user and computer objects, as shown in Figure 17.16. Policies are hard controls that you can force on an object. Policies are refreshed in the background every 90 minutes. So, if a setting that has a policy applied changes, it will be set back during the refresh cycle. Most of the time, however, settings are grayed out when they are being managed by GPO and cannot be changed at all. Preferences allow for files, Registry, environment variable, and Control Panel items to be modified. Preferences set an initial setting and are applied only during first login, so these settings are a preference, not a policy, and the user can change these settings afterward.
FIGURE 17.16 Group Policy Object settings
Login scripts are one of the configurable attributes for a user account. As covered in Chapter 20, “Scripting and Remote Access,” you can use VBScript or Windows batch scripts as login scripting languages. Login scripts are useful on an Active Directory network for connecting network‐mapped drives and printers, among other administrative tasks. Login scripts also provide uniformity across an enterprise by running the same commands for each user configured with the script. The location of the setting is found on the Profile tab of the user account, as shown in Figure 17.17.
FIGURE 17.17 Profile settings for a user account
A home folder is a private network location in which the user can store their personal files. The home folder is an attribute that can be set for a user account in the Active Directory Users and Computers MMC on the Profile tab, as shown in Figure 17.17. The location can be a local path, if the user will use the same computer, and the files should be stored locally for the user. However, it is most useful when you connect a network drive to a remote file server. This allows for centralized file storage, and you can then perform backups on the data.
Normally, when a user logs into the network and a roaming profile exists for the user, the profile is completely downloaded to the computer the user is working on. During logout, all data is written back to the roaming profile location on the network file server. Profiles can become extremely large in size, sometimes even gigabytes, and slow down the login and logout processes.
Folder redirection is a Group Policy setting that allows the redirection of portions of users' profile folders to a network location. When folder redirection is used, the roaming profile is still downloaded. However, the redirected folders are not downloaded; they are simply redirected to the network location. This speeds login and logout times because the entire profile is no longer downloaded (login) and uploaded (logout).
The use of Active Directory domains allows you to adopt a centralized administration model for user accounts. You also have the ability to secure files, printers, and other resources on the domain with these centralized credentials. However, if you secure the resources solely with user accounts, very quickly it will become a daunting task, as more and more people need access to resources. This is a common pitfall of new administrators, because most resources only need to be accessed by a few people initially.
All security should be done using security groups for a few reasons. The first reason is a simple one: you want to administer groups of users and not individual users. It is easier to apply permissions to a group of users than individual users. As the resource needs to be shared by more people, you never have to revisit the resource to apply the new permissions if a group is used. All you need to do is add the new users to the group and they will have access.
One additional benefit that accompanies using security groups is the centralized auditing of permissions. If you want to know who has access to a resource, all you have to do is look at the membership of the group associated with the resource. If you need to check if someone specific has permission to the resource, you just have to look at their group membership.
When securing resources, you should always create a new group that is
associated with the resource and never reuse a group for multiple
resources. For example, if you were securing a main office printer, you
should create a group that explains the resource and the level of
access, such as creating a group called
perm:print_mainprinter
. Just by looking at the group you
can identify that it is a permissions group, that it allows printing,
and that the resource is the main printer.
Malware is a broad term describing any software with malicious intent. Although we use the terms malware and virus interchangeably, distinct differences exist between them. The lines have blurred because the delivery mechanisms of malware and viruses are sometimes indistinguishable.
A virus is a specific type of malware, the purpose of which is to multiply, infect, and do harm. A virus distinguishes itself from other malware because it is self‐replicating code that often injects its payload into documents and executables. This is done in an attempt to infect more users and systems. Viruses are so efficient in replicating that their code is often programmed to deactivate after a period of time, or they are programmed to only be active in a certain region of the world.
Malware can be found in a variety of other forms, such as covert cryptomining, web search redirection, adware, spyware, and even ransomware, and these are just a few. Today the largest threat of malware is ransomware because it's lucrative for criminals.
Ransomware is a type of malware that is becoming popular because of anonymous currency, such as Bitcoin. Ransomware is software that is often delivered through an unsuspecting random download. It takes control of a system and demands that a third party be paid. The “control” can be accomplished by encrypting the hard drive, by changing user password information, or via any of a number of other creative ways. Users are usually assured that by paying the extortion amount (the ransom), they will be given the code needed to revert their systems back to normal operations. CryptoLocker was a popular ransomware that made headlines across the world (see Figure 17.18). You can protect yourself from ransomware by having antivirus/antimalware software with up‐to‐date definitions and by keeping current on patches.
FIGURE 17.18 CryptoLocker
Trojan horses are programs that enter a system or network under the guise of another program. A Trojan horse may be included as an attachment or as part of an installation program. The Trojan horse can create a backdoor or replace a valid program during installation. It then accomplishes its mission under the guise of another program. Trojan horses can be used to compromise the security of your system, and they can exist on a system for years before they're detected.
The best preventive measure for Trojan horses is to not allow them entry into your system. Immediately before and after you install a new software program or operating system, back it up! If you suspect a Trojan horse, you can reinstall the original program(s), which should delete the Trojan horse. A port scan may also reveal a Trojan horse on your system. If an application opens a TCP or UDP port that isn't supported in your network, you can track it down and determine which port is being used.
A keylogger is normally a piece of software that records an unsuspecting victim's keystrokes. Keyloggers can stay loaded in memory and wait until you log into a website or other authentication system. They will then capture and relay the information to an awaiting host on the Internet.
Keyloggers don't always have to be in the form of software. Some keyloggers are hardware dongles that sit between the keyboard and computer. These must be retrieved and the data must be downloaded manually, so they are not very common.
Rootkits are software programs that have the ability to hide certain things from the operating system. They do so by obtaining (and retaining) administrative‐level access. With a rootkit, there may be a number of processes running on a system that don't show up in Task Manager, or network connections that don't appear in a netstat display of established or listening connections. The rootkit masks the presence of these items by manipulating function calls to the operating system and filtering out information that would normally appear.
Unfortunately, many rootkits are written to get around antivirus and antispyware programs that aren't kept up‐to‐date. The best defense you have is to monitor what your system is doing and catch the rootkit in the process of installation.
Spyware differs from other malware in that it works—often actively—on behalf of a third party. Rather than self‐replicating, like viruses and worms, spyware is spread to machines by users who inadvertently ask for it. The users often don't know they have asked for it but have done so by downloading other programs, visiting infected sites, and so on.
The spyware program monitors the user's activity and responds by offering unsolicited pop‐up advertisements (sometimes known as adware), gathering information about the user to pass on to marketers, or intercepting personal data, such as credit card numbers.
With the rise of Bitcoin came the rise of cryptominers. A cryptominer is typically a purpose‐built device that grinds out cryptographic computations. When a computation is solved, a cryptocurrency coin is created, and that coin equates to real money, such as Bitcoin, Ethereum, and Dogecoin, just to name a few. A cryptominer does not always have to be a dedicated, purpose‐built device; it can also be a distributed group of computers called a cryptopool.
Malware in the form of cryptominers became very popular because it is a lucrative way for threat agents to make money. The problem is that the threat agent uses your computer to grind out the computations. The most common way a threat agent will run a cryptominer remotely is with JavaScript embedded on a malicious web page. Threat agents have also been known to create viruses in which the payload (a cryptominer) uses your video card to grind out the computations. However, the JavaScript variant is more commonly found in the wild.
Viruses can be classified as polymorphic, stealth, retrovirus, multipartite, armored, companion, phage, and macro viruses. Each type of virus has a different attack strategy and different consequences.
The following sections introduce the symptoms of a virus infection, explain how a virus works, and describe the types of viruses you can expect to encounter and how they generally behave. We'll also discuss how a virus is transmitted through a network and look at a few hoaxes.
Many viruses will announce that you're infected as soon as they gain access to your system. They may take control of your system and flash annoying messages on your screen or destroy your hard disk. When this occurs, you'll know that you're a victim. Other viruses will cause your system to slow down, cause files to disappear from your computer, or take over your disk space.
You should look for some of the following symptoms when determining if a virus infection has occurred:
This list is by no means comprehensive. What is an absolute, however, is the fact that you should immediately quarantine the infected system. It is imperative that you do all you can to contain the virus and keep it from spreading to other systems within your network, or beyond.
A virus, in most cases, tries to accomplish one of two things: render your system inoperable or spread to other systems. Many viruses will spread to other systems given the chance and then render your system unusable. This is common with many of the newer viruses.
If your system is infected, the virus may try to attach itself to every file in your system and spread each time you send a file or document to other users. Some viruses spread by infecting files that are either transmitted through a network or by removable media, such as backup tapes, USB thumb drives, CDs, and DVDs, just to name a few. When you give removable media to another user or put it into another system, you then infect that system with the virus.
Many viruses today are spread using email. The infected system attaches a file to any email that you send to another user. The recipient opens this file, thinking it's something that you legitimately sent them. When they open the file, the virus infects the target system. The virus might then attach itself to all the emails that the newly infected system sends, which in turn infects the computers of the recipients of the emails. Figure 17.19 shows how a virus can spread from a single user to literally thousands of users in a very short period of time using email.
FIGURE 17.19 A virus spreading from an infected system using email
Viruses take many different forms. The following list briefly introduces these forms and explains how they work.
These are the most common types of viruses, but this isn't a comprehensive list:
Armored Virus An armored virus is designed to make itself difficult to detect or analyze. Armored viruses cover themselves with protective code that stops debuggers or disassemblers from examining critical elements of the virus. The virus may be written in such a way that some aspects of the programming act as a decoy to distract analysis while the actual code hides in other areas in the program.
From the perspective of the creator, the more time that it takes to deconstruct the virus, the longer it can live. The longer it can live, the more time it has to replicate and spread to as many machines as possible. The key to stopping most viruses is to identify them quickly and educate administrators about them—the very things that the armor makes difficult to accomplish.
FIGURE 17.20 A multipartite virus commencing an attack on a system
Upon infection, some viruses destroy the target system immediately. The saving grace is that the infection can be detected and corrected. Some viruses won't destroy or otherwise tamper with a system; instead, they use the victim system as a carrier. The victim system then infects servers, fileshares, and other resources with the virus. The carrier then infects the target system again. Until the carrier is identified and cleaned, the virus continues to harass systems in this network and spread.
A botnet is a group of zombies, which sounds like a ridiculous beginning to a horror movie. When malware infects a computer, its purpose is often to lie dormant and await a command from a command‐and‐control server. When this happens, the computer is considered a zombie. When enough infected computers (zombies) check in, the threat agent will send a command to the command‐and‐control server, and the botnet of zombies will work on the task. Often the task is to launch a malicious DDoS attack or to send spam. DDoS will be covered later in this chapter.
A worm is different from a virus in that it can reproduce itself, it's self‐contained, and it doesn't need a host application to be transported. Many of the so‐called viruses that make the news are actually worms. However, it's possible for a worm to contain or deliver a virus to a target system.
By their nature and origin, worms are supposed to propagate, and they use whatever services they're capable of using to do that. Early worms filled up memory and bred inside the RAM of the target computer. Worms can use TCP/IP, email, Internet services, or any number of possibilities to reach their target.
Now that you understand some of the common software threats, let's look at how you can protect yourself from them. This section discusses practical tools and methods that you can use to safeguard yourself from common software‐based threats. We will also discuss tactics you can use to mitigate risk as well as ways to recover from security mishaps.
Most malware can be simply prevented with the use of antivirus software. Back when Windows XP came out, the running joke was that you would get a virus before you could get a chance to install antivirus software. To some extent this was true, if you had to get online to retrieve the software.
Microsoft introduced Microsoft Security Essentials as a download for Windows XP, and the Windows Vista operating system started to ship with it installed. Today, Windows comes preinstalled with Windows Virus & Threat Protection, so if you don't purchase antivirus software you are still protected. As a result of these tactics, Microsoft has made the Windows operating system safer than it used to be.
Although Microsoft's antivirus program will work fine for most computing needs, there are some advantages to purchasing antivirus products from third‐party vendors. To understand some of the differences, you need to be familiar with the components of antivirus software. Antivirus software comprises two main components: the antivirus engine and the definitions database, as shown in Figure 17.21.
FIGURE 17.21 Antivirus software components
Malware is a broad term and covers many different software threats that we learned about in this chapter. Antimalware and antivirus are extremely similar in their functionality, and sometimes vendors have a hard time differentiating their products. This is because many antivirus products now check more than just files.
An antimalware software package will not only check the filesystem for threats, like rootkits and trojans, but will also watch incoming email for phishing scams and malicious websites, as shown in Figure 17.22. When these threats are detected, the user gets a notification, and the threat is usually mitigated or avoided completely.
FIGURE 17.22 Antimalware components
In Exercise 17.1, you will test your antimalware protection with a harmless file known as an EICAR test file.
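The general idea behind such a test can be sketched in a few lines of Python. The EICAR string below is the industry‐standard harmless test signature; verify the exact string at eicar.org before relying on it, and expect antivirus software to flag or quarantine the file as soon as it is written. The filename is a hypothetical example.

# Illustrative sketch only: writing the EICAR test string should trigger
# real-time protection without putting the system at any risk.
EICAR_TEST_STRING = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

with open("eicar_test.txt", "w") as handle:  # hypothetical filename
    handle.write(EICAR_TEST_STRING)
print("File written -- a working antivirus product should raise an alert.")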
A recovery console can perform a number of useful functions when you are recovering from a security threat. The Windows Recovery Environment (WinRE) is one such recovery console, as we'll cover in this section. Its most useful function is the Reset This PC option, which allows you to refresh the operating system while keeping your data files or to remove everything and start from scratch, as shown in Figure 17.23. The latter of the two options assumes you have backups of your data files.
FIGURE 17.23 Windows Recovery Environment
The Windows Recovery Environment also allows you to perform a system restore, whereby you can restore the operating system back to a specific point in time. If a system recovery image exists, you can also recover with the System Image Recovery option. This option will reset the operating system back to the point in the recovery image, which is usually just like the day you turned it on. Figure 17.24 shows the Advanced Options menu.
By far the best prevention of security threats is the education of your end users regarding common threats. For example, the most effective method of preventing viruses, malware, spyware, and harm to data is to teach your users not to open suspicious files and to open only those files that they're reasonably sure are virus/malware free. End users should also be taught how to identify Trojans and phishing email scams. End‐user education should also cover guidelines for the physical destruction of data, in particular any paperwork that has sensitive information on it, as well as the various social engineering threats and how to identify them. An end user who has foresight and who exercises vigilance is more powerful than any antivirus or antimalware product on the market.
FIGURE 17.24 Windows Advanced Options
End‐user education in an organization is normally part of the employee onboarding process for new hires. However, it should not stop there, because threats change every day. In ideal circumstances, organizations revisit the training for their employees at least once a year. This training can be performed in a formal classroom setting or through an online service. Some online services offer educational videos with interactive questions to verify that the employee has learned the objectives of the video.
Because phishing is such a widespread problem for organizations, special antiphishing training is often mandatory for employees a few times a year. Often organizations will phish their own employees with specially crafted emails in an attempt to see how well their training is working. When an employee spots the phishing attempt, they can earn rewards, like a gift card. However, if they get phished, they must retake the antiphishing training, and they may be targeted again in the future. A popular month for these tactics is October, because it is Cybersecurity Awareness Month.
“Software firewalls” is a misnomer for this section, since all firewalls are software‐based in some way. Sure, you might purchase a piece of equipment that is classified as a hardware firewall, but there is software running on the firewall to protect your network. However, when we discuss firewalls in respect to operating systems, we call them software firewalls, because they are part of the operating system and thus considered software.
Let's look at where we came from and where we are now. A major gamechanger in the history of Microsoft was the release of Windows XP Service Pack 2, which switched on the built‐in firewall by default. It was long overdue for the operating system at the time. It was also not well received, because administrators had to learn firewall rules when they installed a new software package that required incoming network traffic. As a result, the firewall was the first thing that got shut off when there was a problem with connectivity.
Throughout the years of new Windows versions, the firewall has received new features and become a polished product. Windows 10 renamed the product Windows Defender Firewall and Windows Defender Firewall with Advanced Security, as shown in Figure 17.25. Both of these configure the same firewall service, with the latter of the two allowing for much more granular control. The inner workings are almost identical to those of the original firewall that shipped with Windows XP SP2.
FIGURE 17.25 Windows Defender Firewall with Advanced Security on Windows 10
The Windows firewall has achieved its original purpose of protecting the operating system from malicious worms and malicious inbound network connections. By default, the outbound network traffic is allowed and inbound network traffic is blocked, unless a rule exists (see Figure 17.26).
FIGURE 17.26 Windows Defender Firewall with Advanced Security defaults
You will also notice that there are three different profiles: Domain, Private, and Public. When the network service starts up, it contacts the default gateway (router) and configures itself to a profile. This allows the operating system to be location‐aware and protect itself differently based on your location. If the router has never been seen before, then you'll get prompted with a dialog box asking you to choose if you want to allow your PC to be discoverable, as shown in Figure 17.27. If you answer Yes, then the firewall profile will be configured as Private, and any rules associated with the Private Profile will be active. If you answer No, then any rules associated with the Public Profile will be active. The Domain Profile is automatically selected if the network is the corporate network and the operating system is joined to the domain.
FIGURE 17.27 Windows location dialog box prompt
Firewalls are also built into other operating systems, including Linux and macOS. The firewall included, and the way you configure it, varies by operating system and, in the case of Linux, by distribution. However, most distributions of Linux, such as Ubuntu and Debian, come with the iptables firewall installed. CentOS and Fedora come with firewalld, which also supports location‐based firewall rules.
Regardless of which type of firewall an operating system comes preinstalled with, a third‐party firewall can be installed. These firewalls can offer intrusion‐detection capabilities that alert you when someone is attacking. In almost all cases, the firewall that comes with the operating system is more than adequate, but you have to keep it on at all times to prevent unwanted connections.
When you are compromised by a virus or other type of malware, the only way to be sure you have removed it completely is to reinstall the operating system. This may seem like an extreme measure, but virus researchers do not always know what the threat agent embeds in the operating system. The threat agent's mission is to gain access to your operating system and to keep a persistent connection. If that means opening a few other backdoors, then that is what they will embed in their malware.
Fortunately, the Windows operating system makes it easy to reinstall the operating system. The Windows operating system allows you to reset the PC with the use of the recovery console or from the Settings app. Many devices also have a recovery tool embedded in the browser to factory‐reset/reimage the device.
Social engineering is a process in which an attacker attempts to acquire information about your network and system by social means, such as talking to people in the organization. A social engineering attack may occur over the phone, by email, or in person. The intent is to acquire access information, such as user IDs and passwords. When the attempt is made through email or instant messaging, it is known as phishing (discussed later), and it's often made to look as if a message is coming from sites where users are likely to have accounts. (Banks, bills, and credit cards are popular.)
These are relatively low‐tech attacks and are more akin to con jobs. Take the following example: Your help desk gets a call at 4:00 a.m. from someone purporting to be the vice president of your company. They tell the help desk personnel that they are out of town to attend a meeting, their computer just failed, and they are sitting in a FedEx office trying to get a file from their desktop computer back at the office. They can't seem to remember their password and user ID. They tell the help desk representative that they need access to the information right away or the company could lose millions of dollars. Your help desk rep knows how important this meeting is and gives the user ID and password over the phone. At this point, the attacker has just successfully socially engineered an ID and password that can be used for an attack by impersonating a high‐profile person.
Another common approach is initiated by a phone call or email from someone who pretends to be your software vendor, telling you that they have a critical fix that must be installed on your computer system. It may state that if this patch isn't installed right away, your system will crash and you'll lose all your data. The caller claims that, for some reason, your maintenance account password has been changed and they can't log in. Your system operator gives the password to the person. You've been hit again.
In Exercise 17.2, you'll test your users to determine the likelihood of a social engineering attack. The steps are suggestions for tests; you may need to modify them slightly to be appropriate at your workplace. Before proceeding, make certain that your manager knows that you're conducting such a test and approves of it.
Phishing is a form of social engineering in which you ask someone for a piece of information that you are missing by making it look as if it is a legitimate request. An email might look as if it is from a bank and contain some basic information, such as the user's name. These types of messages often state that there is a problem with the person's account or access privileges. The person will be told to click a link to correct the problem. After they click the link, which goes to a site other than the bank's, they are asked for their username, password, account information, and so on. The person instigating the phishing attack can then use this information to access the legitimate account.
The only preventive measure in dealing with social engineering attacks is to educate your users and staff never to give out passwords and user IDs over the phone or via email or to anyone who isn't positively verified as being who they say they are.
When phishing is combined with Voice over IP (VoIP), it becomes known as vishing, which is just an elevated form of social engineering. While crank calls have existed since the invention of the telephone, the rise in VoIP now makes it possible for someone to call you from almost anywhere in the world, without the worry of tracing, caller ID, and other features of landlines, and pretend to be someone they are not in order to get data from you.
Two other forms of phishing of which you should be aware are spear phishing and whaling, which are very similar in nature. With spear phishing, the attacker uses information that the target would be less likely to question because it appears to be coming from a trusted source. Suppose, for example, that you receive a message that appears to be from your spouse that says to click here to see that video of your children from last Christmas. Because it appears far more likely to be a legitimate message, it cuts through your standard defenses like a spear, and the likelihood that you would click this link is higher. Generating the attack requires much more work on the part of the attacker, and it often involves using information from contact lists, friend lists from social media sites, and so on.
Whaling is nothing more than phishing, or spear phishing, for so‐called “big” users—thus, the reference to the ocean's largest creatures. Instead of sending out a To Whom It May Concern message to thousands of users, the whaler identifies one person from whom they can gain all the data that they want—usually a manager or business owner—and targets the phishing campaign at them.
Another form of social engineering is known as shoulder surfing. It involves nothing more than watching someone when they enter their sensitive data. They can see you entering a password, typing in a credit card number, or entering any other pertinent information. A privacy filter can be used to block people from looking at your screen from an angle. However, privacy filters do not protect you as you are entering a password, since a shoulder surfer will watch your keystrokes. The best defense against this type of attack is to survey your environment before entering personal data. It is also proper etiquette to look away when someone is entering their password.
Tailgating is another form of social engineering, and it works because we want to be helpful. Tailgating is the act of entering a building that requires a swipe card or other authentication factor by closely following the person in front of you as they authenticate.
You may be walking toward an entry that requires some authentication, when someone walking the same way introduces themselves as new to the company and shares some stories about their first day. By the time you get to the door, you may hold it open for them and wish them luck. It can even happen without your knowing it, if the door doesn't fully close behind you and they grab it. Several different tactics, such as access control vestibules and guards, can be used to mitigate this threat. The best prevention is education of your staff to make sure that it does not happen.
Impersonation is prevalent in many different social engineering attacks. The threat agent portrays (impersonates) another employee for many of the attacks to work. Most employees want to help another fellow employee. The threat agent might be impersonating Bill Jones from accounting asking IT to reset his password. Or the threat agent might impersonate the IT department calling Bill Jones and instructing him to type some well‐crafted commands into his computer. Many phishing emails also impersonate your bank, an online store, or some other reputable source in an attempt to steal your credentials. The best method for combatting impersonation is end‐user training. Training should help users identify suspicious email or phone calls.
Dumpster diving is the act of a person rifling through the trash in the hope of finding information. A strong policy to prevent dumpster diving is the physical destruction of any sensitive data. Destruction can be performed with the use of a mechanical shredder on site or by a service that destroys materials off site on behalf of the organization. If an off‐site shredding service is used, always request a signed certificate of destruction to prove that the sensitive material was destroyed.
An evil twin attack is a wireless phishing attack in which the attacker sets up a wireless access point to mimic the organization's wireless access points. When a user connects to the evil twin, it allows the attacker to listen in on the user's traffic. Evil twin access points often report a stronger signal to entice the user to connect to the specific access point, as shown in Figure 17.28. The attacker will then create a connection back to the wireless network and passively sniff network traffic as it routes the traffic to the original destination. The best way to mitigate against evil twin attacks is to perform wireless site surveys on a regular basis to ensure that only valid access points are being used.
FIGURE 17.28 Evil twin attack
A threat is a potential danger to the network or the assets of the organization. The potential danger to a network or organization is the attack that a threat agent can carry out. All attacks upon an organization are either technology based or physically based. A technology‐based attack is one in which the network and operating systems are used against the organization in a negative way. Physically based attacks use human interaction or physical access, which we previously covered as social engineering attacks. We will now cover several different types of technology‐based attacks that are commonly used against networks and organizations.
A denial‐of‐service (DoS) is an attack launched to disrupt the service or services a company receives or provides via the Internet. A DoS attack is executed with an extremely large number of false requests; because of the attack, the servers will not be able to fulfill valid requests for clients and employees. There are several different types of DoS attacks:
FIGURE 17.29 An ICMP‐based smurf attack
FIGURE 17.30 An amplified attack
FIGURE 17.31 A DDoS attack
When a hole (vulnerability) is found in a web browser or other software, and attackers begin exploiting it the very day it is discovered by the developer (bypassing the one‐to‐two‐day response time that many software providers need to put out a patch once the hole has been found), it is known as a zero‐day attack (or exploit). It is very difficult to respond to a zero‐day exploit. If attackers learn of the weakness the same day as the developer, then they have the ability to exploit it until a patch is released. Often, the only thing that you as a security administrator can do, between the discovery of the exploit and the release of the patch, is to turn off the service. You can do this by isolating or disconnecting the system(s) from the network until a patch is released. Although this can be a costly undertaking in terms of productivity, it is the only way to keep the network safe.
A spoofing attack is an attempt by someone or something to masquerade as someone else. This type of attack is usually considered an access attack. A common spoofing attack that was popular for many years on early UNIX and other timesharing systems involved a programmer writing a fake login program. It would prompt the user for a user ID and password. No matter what the user typed, the program would indicate an invalid login attempt and then transfer control to the real login program. The spoofing program would write the login and password into a disk file, which was retrieved later.
The most popular spoofing attacks today are IP spoofing, ARP spoofing, and DNS spoofing. With IP spoofing, the goal is to make the data look as though it came from a trusted host when it didn't (thus spoofing the IP address of the sending host), as shown in Figure 17.32. The threat agent will forge their packet with the victim's source address.
FIGURE 17.32 IP address spoofing attack
With ARP spoofing (also known as ARP poisoning), the media access control (MAC) address of the data is faked. By faking this value, it is possible to make it look as though the data came from a networked device that it did not come from. This can be used to gain access to the network, to fool the router into sending to the device data that was intended for another host, or to launch a DoS attack. In all cases, the address being faked is an address of a legitimate user, making it possible to get around such measures as allow/deny lists.
With DNS spoofing, the DNS server is given information about a name server that it thinks is legitimate when it isn't. This can send users to a website other than the one to which they wanted to go, reroute mail, or do any other type of redirection for which data from a DNS server is used to determine a destination. Another name for this is DNS poisoning.
The important point to remember is that a spoofing attack tricks something or someone into thinking that something legitimate is occurring.
Many of the attacks we're discussing can be used in conjunction with an on‐path attack, which was previously known as a man‐in‐the‐middle (MitM) attack. For example, the evil twin attack mentioned earlier allows the attacker to position themselves between the compromised user and the destination server. The attacker can then eavesdrop on a conversation and possibly change information contained in the conversation. Conventional on‐path attacks allow the attacker to impersonate both parties involved in a network conversation. This allows the attacker to eavesdrop and manipulate the conversation without either party knowing. The attacker can then relay requests to the server as the originating host attempts to communicate on the intended path, as shown in Figure 17.33.
FIGURE 17.33 On‐path attack
Password attacks occur when an account is attacked repeatedly. This is accomplished by using applications known as password crackers, which send possible passwords to the account in a systematic manner. The attacks are initially carried out to gain passwords for an access or modification attack. There are several types of password attacks:
Insider threats are threats that originate from within your organization. Employees know the organization and can navigate it to get the information they need. A disgruntled employee can carry out an attack on the organization by leaking information or selling it. When information is sold to a competitor for profit, it is considered corporate espionage. The insider threat does not always need to be criminal in intent; it can also be as simple as an employee plugging an unauthorized wireless access point into the corporate network.
A Structured Query Language (SQL) injection attack occurs when a threat agent enters a series of escape codes along with a well‐crafted SQL statement into a URL. The seemingly harmless page on the backend that is awaiting the request runs the injected SQL along with its normal query. For example, a normal post URL might look like this:
http://www.wiley.com/phone.php?name=jones
The threat agent will add their SQL injection after the normal post query string, such as the following:
http://www.wiley.com/phone.php?name=jones; DROP TABLE Users
This would generate the following SQL query on the backend and send the malicious query to the SQL database:
SELECT FullName, PhoneNum
From Phones
Where FullName Like '%jones%'; DROP TABLE USERS
The first two and a half lines, up to the semicolon, are generated by the page the query is posted to. This query basically tells the SQL database to return the full name and phone number for anything that contains jones. However, the threat agent appended DROP TABLE Users to the query with a semicolon. This will delete the Users table and cause disruption. Technically, the SQL injection causes a DoS attack. Other malicious queries that are not so obvious and disruptive can be submitted to discover information like table structure and consequently steal data. Many retailers, banks, and online stores, just to name a few, have fallen prey to SQL injection attacks and made front‐page news.
The best way to combat this attack is by building input validation into the rendered page on the backend; this is also known as sanitization. This mitigation tactic is well outside the scope of the exam, but understanding the attack is the key takeaway.
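As a rough illustration of sanitization through parameterized queries, the sketch below uses Python's built‐in sqlite3 module. The Phones table and column names mirror the example above but are hypothetical, and the same placeholder technique applies to other database libraries.

import sqlite3

def lookup_phone(conn, name):
    # UNSAFE (for comparison): concatenating user input would let
    # "jones'; DROP TABLE Users; --" ride along with the query.
    # SAFE: the ? placeholder treats the input strictly as data, never as SQL.
    cursor = conn.execute(
        "SELECT FullName, PhoneNum FROM Phones WHERE FullName LIKE ?",
        ("%" + name + "%",),
    )
    return cursor.fetchall()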
Cross‐site scripting (XSS) is a tactic a threat agent uses to deliver a malicious script to the victim by embedding it into a legitimate web page. Common delivery methods for XSS are message boards, forums, or any page that allows comments to be posted. The threat agent will submit a post to these types of pages with their malicious script, such as JavaScript. When the victim browses the page, the threat agent's script will execute.
JavaScript and other scripting languages are controlled tightly by the browser, so direct access to the operating system is usually not permitted. However, the script will have access to the web page you are browsing or the cookies the actual page stores. This type of attack is common in hijacking web pages and trying to force the user into installing a piece of malware.
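One common mitigation, output encoding, can be sketched in a few lines of Python using the standard library's html module; the comment text and page fragment are invented for illustration.

import html

# A posted comment containing a script payload (illustrative only).
user_comment = "<script>alert('XSS')</script>Nice article!"

# Escaping before rendering turns the markup into harmless text,
# so the script is displayed on the page instead of executing.
safe_comment = html.escape(user_comment)
page_fragment = "<p class='comment'>" + safe_comment + "</p>"
print(page_fragment)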
Exploits and vulnerabilities both play a part in the compromise of systems. Vulnerabilities are weaknesses in the security of an operating system or network product. Vulnerabilities are the reason we need to constantly patch network systems. Exploits are scripts, code, applications, or techniques that a threat agent uses to take advantage of vulnerabilities, as shown in Figure 17.34. In the following section we will cover the most common vulnerabilities as they pertain to the CompTIA exam.
FIGURE 17.34 Threat agents, exploits, and vulnerabilities
One of the easiest ways to make your systems vulnerable and expose them to threats is to fail to keep them compliant. As an administrator, you should always follow security regulatory standards as well as compliance standards.
One product that can keep your operating systems compliant is Microsoft Endpoint Configuration Manager (MECM). MECM allows for the publishing of a baseline for the Windows operating system. It will then monitor the baseline against the operating systems in your organization and will remediate them if they fall out of compliance.
MECM is just one of many tools that can be used for compliance, several of which are third‐party tools. Third‐party compliance solutions provide other unique benefits, such as the compliance of third‐party applications in addition to Windows.
When operating systems are installed, they are usually point‐in‐time snapshots of the current build of the operating system. From the time of the build to the time of installation, several vulnerabilities can be published for the operating system. When an OS is installed, you should patch it before placing it into service. Patches remediate the vulnerabilities found in the OS and fixed by the vendor. Updates add new features not included with the current build. However, some vendors may include vulnerability patches in updates. Network devices also have patches and updates that should be installed prior to placing them into service.
After the initial installation of the device or operating system and the initial patches and updates are installed, you are not done! Vendors continually release patches and updates to improve security and functionality, usually every month and sometimes outside of the normal release cycle. When patches are released outside of the normal release cycle, they are called out‐of‐band patches and are often in response to a critical vulnerability.
Microsoft products are patched and updated through the Windows Update functionality of the operating system. However, when an administrator is required to patch and update an entire network, Windows Server Update Services (WSUS) can be implemented. A WSUS server enables the administrator to centrally manage patches and updates. The administrator can also report on which systems still need to be patched or updated.
All operating systems have a life cycle of release, support, and eventually end of life. When most operating systems reach their end of life (EOL), the vendor stops supplying security patches. The lack of current patches creates a giant vulnerability for the organization, since the operating system is no longer protected from the latest threats. To combat these vulnerabilities, it is recommended that you keep the operating system current. This is accomplished with continual upgrades to the operating system as new versions are released.
Obviously, an unpatched system presents a huge vulnerability in your network, but an unprotected system is just as vulnerable. A workstation or server without antivirus protection or firewall protection poses a significant risk. A workstation or server without antivirus software can contract malware, potentially infect other computers, and ultimately leak data. A missing or misconfigured firewall is just as dangerous. The firewall is used to prevent unauthorized connections that could exploit a vulnerability in the operating system. Firewalls are also often used in place of an undesirable patch that causes other issues. For example, turning on a firewall rule that prevents connecting to the Print Spooler service on Windows is a good protection method; it can shield you from print spooler vulnerabilities if the workstation or server is not providing print services. However, if the operating system is not protected by the firewall and an exploit is released, it is consequently vulnerable.
Security is the biggest concern as it applies to BYOD devices. The biggest reason is that the organization has less control over BYOD devices than over devices it issues and owns. BYOD devices come with two inherent risks: data leakage and data portability. Data leakage happens when a device is lost or compromised in some way. There are tactics to mitigate this, such as full device encryption. However, the user's device is then forcibly encrypted by the organization, and there could be legal ramifications. Another common tactic is to use mobile device management (MDM) software that creates a partition for company data. This allows the company to encrypt its data without affecting the user's data.
Data portability means that the user can cart away organizational data when they leave. Although most of the time this is not a concern, an unscrupulous salesperson may pose a big risk to the organization. A line‐of‐business (LOB) application should be selected that only displays data on the mobile device and does not allow the data to be stored locally. Another tactic is to employ MDM software that allows remote wiping of the organization's data. When an employee leaves, the wipe is executed and the organization's data is gone. This type of functionality is also useful if a device is lost, so it mitigates the risk of data leakage as well.
A best practice is a technique or methodology that has largely been adopted by the professional community. A number of security best practices are techniques that can keep the organization safe and secure. Many of the best practices will mitigate some of the risk from attacks that I previously discussed. The benefits of other best practices might not be immediately apparent, but they will keep an organization safe in the long run.
To prevent the loss of data, data encryption should be considered—not that the data is really ever lost, but it's no longer within your control. Consider an example of a laptop with sensitive patient record information stored on it. If the laptop were to be stolen, there are a number of utilities that could provide unauthorized access. However, with encryption (such as BitLocker) enabled, both the operating system and the data would remain encrypted and inaccessible.
There are three concepts associated with data encryption: data in use, data in transit, and data at rest, as shown in Figure 17.35. Data in use is the concept of data that is in an inconsistent state and/or currently resident in memory. Most of the time, you don't need to be too concerned with data in memory, since that is normally a function of the operating system. However, when data is written to a temporary location, it is considered data in use and therefore should be encrypted. Data in transit is information traversing the network and should always be encrypted so that it is not intercepted. Over the past decade, just about every website and application has adopted some form of encryption, so there is no reason not to use encryption in transit.
Data at rest is a point of contention, because it is often believed that once the data hits the server, it's safe. However, the data is actually more vulnerable there because it's all in one spot. If an unencrypted drive needs to be replaced because it went bad, there is no way, outside of physical destruction, to assure that the data is inaccessible. For backup tapes, encryption is not only a good idea but should be a requirement.
FIGURE 17.35 Data and encryption
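As a small sketch of file‐level encryption for data at rest, the following uses the third‐party Python cryptography package. Whole‐disk tools such as BitLocker operate below the filesystem, so this is only an analogy, and the file names are hypothetical.

from cryptography.fernet import Fernet  # third-party "cryptography" package

key = Fernet.generate_key()   # in practice, protect and back up this key
cipher = Fernet(key)

# Encrypt the sensitive file so that only ciphertext is stored at rest.
with open("patient_records.csv", "rb") as src:        # hypothetical source file
    ciphertext = cipher.encrypt(src.read())
with open("patient_records.csv.enc", "wb") as dst:
    dst.write(ciphertext)

# Later, anyone holding the key can recover the plaintext.
plaintext = cipher.decrypt(ciphertext)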
One of the most effective ways to keep a system safe is to employ strong passwords and educate your users about password best practices. Many password‐based systems use a one‐way hashing approach. You can't reverse the hash value in order to guess the password. This makes it impractical to recover the passwords if the database of stored hashes is lifted (stolen) from the operating system. Because the hash is sent over the network in lieu of the actual password, the password is harder to crack.
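The one‐way hashing idea can be sketched with Python's standard hashlib module. The iteration count and salt size below are illustrative choices rather than a recommendation; production systems typically rely on a dedicated password‐hashing scheme.

import hashlib
import os

def hash_password(password, salt=None):
    # A per-user random salt defeats precomputed (rainbow table) attacks.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    # Hash the attempt with the stored salt and compare digests;
    # the plaintext password is never stored or compared directly.
    return hash_password(password, salt)[1] == stored_digest

salt, digest = hash_password("%s4@7dFs#D2$")
print(verify_password("%s4@7dFs#D2$", salt, digest))   # True
print(verify_password("password123", salt, digest))    # False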
Passwords should be as long and complex as possible. Most security experts believe that at least 12 characters should be used—20 or more characters if security is a real concern. If you use only the lowercase letters of the alphabet, you have 26 characters with which to work. If you add the numeric values 0–9, you get another 10 characters. Adding uppercase letters, you gain an additional 26 characters. If you go one step further by using symbol characters (such as !"#$%&'()*+,-./:;<=>?@[\]^_`{|} and ~, including a blank space), you have an additional 33 characters. You then have a palette of 95 characters for each position in your password. A typical example of a complex password using all these elements might be %s4@7dFs#D2$. If you have a hard time coming up with a strong password on your own, you can always use an online password generator, such as https://passwordsgenerator.net.
When the Password Complexity policy in Group Policy is enabled for the Windows operating system, three of the four categories—lowercase, uppercase, numbers, and symbols—must be used in your password. Windows Server and/or Active Directory can also require a minimum password size, which guarantees a secure password when coupled with complexity.
Let's look further at password complexity. If you used a 4‐character password, this would be 95 × 95 × 95 × 95 (95^4), or approximately 81.5 million possibilities. A 5‐character password would give you 95^5, or approximately 7.7 billion possibilities. And a 10‐character password would give you 95^10, or about 6.0 × 10^19 (a very big number) possibilities. As you can see, these numbers increase exponentially with each position added to the password. A 4‐character password could probably be broken in a fraction of a day, whereas a 10‐character password would take considerably longer and much more processing power.
If your password consisted of only the 26 lowercase letters, a 4‐character password would have 26^4, or approximately 457,000 combinations. A 5‐character password would have 26^5, or over 11 million combinations, and a 10‐character password would have 26^10, or 1.4 × 10^14 combinations. That is still a big number, but it would take considerably less time to break than a password drawn from the full 95‐character set. This is all based on the notion that a brute‐force password attack is being performed. If a dictionary attack were being performed, a 4‐ or 5‐character lowercase password could take less than 5 minutes to crack.
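The arithmetic above is easy to verify with a few lines of Python; the character‐set sizes and lengths are taken straight from the discussion.

# Brute-force keyspace = (characters available) ** (password length)
for label, charset_size in (("lowercase only", 26), ("full printable set", 95)):
    for length in (4, 5, 10):
        print(f"{label}, {length} characters: {charset_size ** length:,}")

# lowercase only, 10 characters:     141,167,095,653,376        (~1.4 x 10^14)
# full printable set, 10 characters: 59,873,693,923,837,890,625 (~6.0 x 10^19)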
Mathematical methods of encryption are primarily used in conjunction with other encryption methods as part of authenticity verification. The message and the hashed value of the message can be encrypted using other processes. In this way, you know that the message is secure and hasn't been altered.
Make absolutely certain that you require passwords for all accounts. It's such a simple thing to overlook in a small network, but it's not something a malicious user will overlook. By default, Windows will not allow an account to connect over the network if it has a blank password. It will, however, allow a person to log in locally with a blank password. There is a security option in the local Group Policy that specifies this behavior, as shown in Figure 17.36.
FIGURE 17.36 Windows security options
The operating system is not the only place where you should use a password for security. You should also use passwords on the basic input/output system (BIOS) and Unified Extensible Firmware Interface (UEFI) firmware. If a malicious user has access, they could possibly circumvent your security by booting a live operating system.
You should also change the default passwords on system accounts. There are dedicated sites on the web that document the default usernames and passwords for various vendor devices. A common hacker can easily pull up these sites and find the default username and password for your wireless access point, camera system, or any other system on which you've neglected to change the default password.
Password expiration should be a consideration because passwords can be compromised as time goes on. Whether passwords are compromised by shoulder surfing or keylogging, or intercepted via the network when a user is logging in, the fact remains they are only one factor of authentication. Therefore, passwords should be set to expire on a monthly, bi‐monthly, quarterly, semi‐annual, or annual basis. The more sensitive an account is, the more frequently the password should be changed.
Windows has a default password expiration of 42 days, as shown in Figure 17.37. You should put in place a system to expire passwords on a periodic basis, as stated previously. You would then communicate this to your users via the onboarding process when they are hired.
FIGURE 17.37 Password expiration
In addition to administrator best practices, there are several different end‐user best practices that you should advocate to your users. In the following, we will cover the top end‐user best practices covered by the CompTIA exam. However, when it comes to end‐user best practices and training, these are just the tip of the iceberg.
When a user walks away from their computer and leaves themself logged in, anyone who walks up to the computer has the same level of access as the owner of the account. This type of attack requires that the threat agent be physically present. However, leaving a computer logged in also invites insider threats, unauthorized access to information, or even data loss.
Training users to lock their screen when they walk away is the best way to prevent unauthorized access. By simply pressing the Windows key and L, a user can lock their screen as they walk away.
Alternatively, the administrator can require a user to use a screen saver lock. For example, the screen saver lock can be set to 15 minutes. After 15 minutes of idle time, the screen saver will turn on, and the user will not be able to access the desktop until they enter their password. This setting provides two benefits: first, it provides a visual deterrent to potential threat agents, and second, it prevents a threat agent from using the unattended session to carry out an attack.
When users are not utilizing a system, they should be encouraged to log off the system. When users remain logged in, the programs that they were running stay running as well. If there is malware on the system, it will stay running as well, potentially allowing threat agents to carry out attacks.
When a user logs off the operating system, any malware running will terminate and hopefully not launch on next login. Malware that launches on the next login is considered to be persistent. Outside of malware, if the system has a resource that is shared, then having users log off will free the resource for the next person.
The administrator has controls at their disposal that allow them to police this behavior. After a period of time in which the system is idle, the administrator can have the user forcibly logged off automatically. This is usually performed on shared systems, such as a terminal server that serves applications or virtual desktop infrastructure that serves desktops.
It is our job as administrators to protect information, such as personally identifiable information (PII), as well as usernames and passwords. However, we also bestow this responsibility onto our users, since many times they have direct access to information. Users should be trained to identify PII and methods to protect such information. Examples of end‐user measures to protect sensitive information can be as simple as controlling printouts, using discretion when viewing information with others around, and destroying sensitive trash, just to name a few.
End users also have portable devices that can contain sensitive data, and these devices should be secured when not in use. Many an organization has made front‐page headlines with the loss of a simple laptop containing PII. Locks can be used to secure devices, and most portable devices contain a security slot for a lock. A laptop with sensitive data is not the only device that can be lost or stolen—portable hard drives can also contain PII and thus should be physically controlled, or their use should be prevented.
Given a security‐related scenario, account management can take into consideration such settings as restricting user permissions, setting login time restrictions, disabling the Guest account, locking an account after a certain number of failed attempts, and configuring a screen lock when the system times out after a specified length of inactivity.
When assigning user permissions, follow the principle of least privilege: give users only the bare minimum that they need to do their job. Assign permissions to groups rather than to users, and make users members of groups (or remove them from groups) as they change roles or positions.
The use of groups is crucial to account management. When you apply NTFS permissions directly to a user, you need to visit the resource to identify the permissions granted. When you instead apply the NTFS permission to a group, you can look at either the membership of the group or the group memberships of the user in Active Directory. This allows you to see who has access to the resource without having to visit the resource. Figure 17.38 shows an oversimplified example. A user, Fred, is a member of both the Sales and R&D groups; therefore, he has access to the Sales and R&D folders. In a real‐world application, the group name would be more descriptive, such as Perm:RW_Sales_Server1. This naming describes what the group is used for (permissions), the level of permissions (RW), the resource (Sales share), and the server (Server1).
FIGURE 17.38 Users, groups, and resources
Configure user accounts so that logins can occur only during times that the user can be expected to be working. Preventing logins at 2:00 a.m. can be an effective method of keeping hackers from your systems. This can be performed in Active Directory by clicking the user's account, selecting the Account tab, and then clicking Logon Hours, as shown in Figure 17.39. From this interface, you can configure the permitted hours for logins and denied hours for logins.
FIGURE 17.39 Account restrictions
You can also set an account expiration date in the user account's properties. By default, the account is set to never expire. However, you can add a date at which time the account will expire. This is best used for contractor accounts, where the terms of use can be defined. Because contractor accounts often don't go through typical human resources processes, the accounts can be forgotten. By adding an account expiration, you can be assured that at the end of the contract the account will be disabled (expired).
Configure user account settings to limit the number of login attempts before the account is locked for a period of time. Legitimate users who need to get in before the block expires can contact the administrator and explain why they weren't able to give the right password three times in a row, and illegitimate users will go away in search of another system to try to enter.
When choosing the number of failed attempts, you need to consider the number of calls you get to the help desk versus the security in having few failed attempts before lockout. You'll find that when you set it to three failed attempts, the help desk will get more calls than necessary, but it allows for better security. Setting the number of failed login attempts to five may be better for users, because many users realize after the third failed attempt that their Caps Lock key was on, but it's less secure than three failed attempts. This setting needs to be evaluated against your security requirement and help desk volume.
You should also consider the length of the lockout. If it's a Monday morning and a person enters their password wrong X number of times and gets locked out, 5 minutes might be appropriate. The time it takes to get a cup of coffee and unlock might be just enough time on a Monday morning to allow the user to wake up. You can specify these settings for an entire domain, as shown in Figure 17.40. As shown here, the user will be locked out for 30 minutes after three failed attempts. By default, there is no account lockout policy set for a domain.
FIGURE 17.40 Account lockout policy settings
Default accounts represent a huge weakness because everyone knows they exist. When an operating system is installed—whether on a workstation or a server—certain default accounts are created. Knowing the names of those accounts simplifies the process of potential attackers accessing them because they only have to supply the password.
The default username should be changed whenever possible. Several websites are dedicated to documenting the default username and password for routers, switches, and other network equipment. These sites are useful, especially when you lose the documentation for a device and have to factory‐reset it. Logging in right after you have factory‐reset a device is really the only time that it's acceptable to use the default username and password. You don't need a website to guess the common administrative accounts on equipment or an operating system; they are usually admin, administrator, root, or sysadmin. Changing the default username makes it more challenging for someone to guess the credentials.
Changing the default password to a complex password is also a good practice in hardening the device. However, changing the username will also ensure that a brute‐force attack cannot be performed against the default username. There are many different websites dedicated to listing the default credentials for network devices, so it doesn't take tremendous skill to obtain the default username and password of a device.
When Windows is installed, one of the default accounts it creates is Guest. This represents a weakness that can be exploited by an attacker. While the account cannot do much, it can provide initial access to a system, which the attacker can use to find another account or acquire sensitive information about the system.
You should disable all accounts that are not needed, especially the Guest account. Windows 10 disables the Guest account by default, as shown in Figure 17.41. After you disable accounts that are not needed, rename the accounts, if you can. (Microsoft won't allow you to rename some.) Finally, change the passwords from the defaults and add them to the list of passwords that routinely get changed.
You can require lengthy, complex passwords for all your users as well as lock down the operating system with passwords, but if a user goes to the restroom or on break without locking their workstation, any person wandering by can access everything the user has privileges to.
FIGURE 17.41 Disabled Guest account
A screen saver should automatically start after a short period of idle time, and a password should be required before the user can begin the session again. This method of locking the workstation adds one more level of security, and a Group Policy can be put in place to enforce it. A password‐protected screen saver ensures that if a workstation is left unattended, it will lock and require a password to resume access. You can access this setting on Windows 10/11 by right‐clicking an empty portion of the Desktop and selecting Personalize ➢ Lock Screen ➢ Screen Saver Settings. Then, in the Screen Saver Settings dialog box, select the On Resume, Display Logon Screen check box to manually require a password after the screen saver has activated, as shown in Figure 17.42.
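One way to script the equivalent of that Group Policy setting for the current user is shown below. This is only a sketch: the registry path and value names reflect the policy-backed screen saver settings, and the 600-second timeout is simply an example value.
reg add "HKCU\Software\Policies\Microsoft\Windows\Control Panel\Desktop" /v ScreenSaveActive /t REG_SZ /d 1 /f      # turn the screen saver on
reg add "HKCU\Software\Policies\Microsoft\Windows\Control Panel\Desktop" /v ScreenSaverIsSecure /t REG_SZ /d 1 /f   # require a password on resume
reg add "HKCU\Software\Policies\Microsoft\Windows\Control Panel\Desktop" /v ScreenSaveTimeOut /t REG_SZ /d 600 /f   # start after 10 minutes of idle time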
It is never a good idea to put any media in a workstation if you do not know where it came from or what it is. The simple reason is that said media (CD, DVD, USB) could contain malware. This attack is commonly referred to as a drop attack. Compounding matters, that malware could be referenced in the autorun.inf file, causing it to be summoned when the media is inserted in the machine and requiring no other action. autorun.inf can be used to start an executable, access a website, or do any of a large number of different tasks. The best way to prevent a user from falling victim to such a ploy is to disable the AutoRun feature on the workstation.
FIGURE 17.42 The Screen Saver Settings dialog box
Microsoft has changed the function on Windows so that it no longer acts as it did prior to Windows 7. The feature is now disabled by default. The reason Microsoft changed the default action can be summed up in a single word: security. That text‐based autorun.inf file can not only take your browser to a web page, it can also call any executable file, pass along variable information about the user, or do just about anything else imaginable. Simply put, it is never a good idea to plug any media into your system if you have no idea where it came from or what it holds. Such an action opens up the user's system—and the network—to any number of possible risks. An entire business's data could be jeopardized by someone with elevated privileges inadvertently putting a harmful CD into a computer at work.
The AutoRun feature is disabled by default so that malicious software does not start automatically. However, the ability to automatically start a video, play music, or open a folder when removable media is inserted into the computer is really useful. The functionality of automatically performing an action, or asking what should be done, when media is inserted is still enabled by default on Windows, via a feature called AutoPlay. AutoPlay distinguishes itself from AutoRun in that it does not look at the autorun.inf file and does not start an executable unless the user specifically clicks an option indicating that they want to do so.
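If you need to verify that AutoRun stays disabled, or you want to turn AutoPlay off as well, both behaviors can be controlled through the registry. The sketch below uses the commonly documented policy values; 255 (0xFF) covers every drive type, and both commands assume an elevated prompt.
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDriveTypeAutoRun /t REG_DWORD /d 255 /f   # disable AutoRun for all drive types
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\AutoplayHandlers" /v DisableAutoplay /t REG_DWORD /d 1 /f   # turn off AutoPlay for the current user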
Think of all the sensitive data written to a hard drive. The drive can contain information about students, clients, users—about anyone and anything. The hard drive can be in a desktop PC, in a laptop, or even in a printer. Many laser printers above consumer grade offer the ability to add a hard drive to store print jobs. If the drive falls into the wrong hands, you can not only lose valuable data but also risk a lawsuit for not properly protecting privacy. An appropriate data destruction/disposal plan should be in place to avoid any potential problems.
Since data on media holds great value and liability, that media should never simply be tossed away for prying eyes to stumble on. For the purpose of this objective, the media in question is hard drives, and there are three key concepts to understand with regard to them: formatting, sanitation, and destruction. Formatting prepares the drive to hold new information (which can include copying over data already there). Sanitation involves wiping the data off the drive, whereas destruction renders the drive no longer usable.
For exam purposes, the best practices for recycling or repurposing fall into the categories of low‐level formats (as opposed to standard formatting), overwrites, and drive wipes.
There are multiple levels of formatting that can be done on a drive. A standard format, accomplished using the operating system's format utility (or similar), can mark space occupied by files as available for new files without truly deleting what was there. Such erasing—if you want to call it that—doesn't guarantee that the information isn't still on the disk and recoverable.
A low‐level format (typically accomplished only in the factory) can be performed on the system, or a utility can be used to completely wipe the disk clean. This process helps to ensure that information doesn't fall into the wrong hands.
The manufacturer performs a low‐level format on integrated device electronics (IDE) hard drives. Low‐level formatting must be performed even before a drive can be partitioned. In low‐level formatting, the drive controller chip and the drive meet for the very first time and learn to work together. Because controllers are integrated into SATA and IDE drives, low‐level formatting is a factory process. Low‐level formatting is not operating system–dependent.
The main thing to remember for the exam is that most forms of formatting included with the operating system do not actually erase the data completely. Formatting the drive and then disposing of it has caused many companies problems when individuals who never should have seen it retrieve the data using applications that are commercially available.
A number of vendors offer hard drives with Advanced Encryption Standard (AES) cryptography built in. However, it's still better to keep these secure hard drives completely out of the hands of others than to trust their internal security mechanisms once their usable life span has passed for the client. Some vendors include freeware utilities to erase the hard drive. If it is a Serial ATA (SATA) drive, you can always run HDDErase, but you are still taking your chances.
Solid‐state drives (SSDs) pose a greater problem since the media is flash memory and not mechanical, like conventional hard disk drives (HDDs). Low‐level formats can be performed, as mentioned in the preceding section, but the 1s and 0s will still be technically on the flash memory. Therefore, many vendors have a sanitization utility for scrubbing information from SSDs. It is best to check with the vendor, as these tools are specific to the vendor and model of SSD.
Overwriting the drive entails copying over the data with new data. A common practice is to replace the data with 0s. A number of applications allow you to recover what was there prior to the last write operation, and for that reason, most overwrite software will write the same sequence and save it multiple times.
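Windows also ships with tools that can perform a basic overwrite, which may be sufficient for lower‐risk drives. The sketch below assumes a secondary volume lettered D: that holds no data you intend to keep; purpose‐built wipe utilities remain the better choice for drives leaving your control.
cipher /w:D:\              # overwrite only the free (previously deleted) space on the volume
format D: /FS:NTFS /P:2    # reformat, zero every sector, then make two additional random-data passes (all data on D: is lost)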
DBAN is a utility that comes with its own boot disk from https://dban.org. You can find a number of other software “shredders” by doing a quick web search.
If it's possible to verify beyond a reasonable doubt that a piece of hardware that's no longer being used doesn't contain any data of a sensitive or proprietary nature, then that hardware can be recycled (sold to employees, sold to a third party, donated to a school, and so on). That level of assurance can come from wiping a hard drive or using specialized utilities.
If you can't be assured that the hardware in question doesn't contain important data, then the hardware should be destroyed. You cannot, and should not, take a risk that the data your company depends on could fall into the wrong hands.
Physically destroying the drive involves rendering it no longer usable. While the focus is on hard drives, you can also physically destroy other forms of media, such as flash drives and CD/DVDs.
Many commercial paper shredders are also capable of destroying DVDs and CDs. Paper shredders, however, are not able to handle hard drives; you need a shredder created for just such a purpose. A low‐volume hard drive shredder that will destroy eight drives per minute can carry a suggested list price of around $20,000.
If you don't have the budget for a hard drive shredder, you can accomplish similar results in a much more time‐consuming way with a power drill. The goal is to physically destroy the platters in the drive. Start the process by removing the cover from the drive—this is normally done with a Torx driver. (Although #8 does not work with all drives, it is a good one to try first.) You can remove the arm with a slotted screwdriver and then the cover over the platters using a Torx driver. Don't worry about damaging or scratching anything—nothing is intended to be saved. Everything but the platters can be tossed away.
As an optional step, you can completely remove the tracks using a belt sander, grinder, or palm sander. The goal is to turn the shiny surface into fine powder. Again, this step is optional, but it adds one more layer of assurance that nothing usable remains. Always wear eye protection and be careful not to breathe in any of the fine particles that are generated during the grinding/destruction process.
Following this, use the power drill to create as small a set of particles as possible. A drill press works much better for this task than trying to hold the drive and drill it with a handheld model.
A large electromagnet can be used to destroy any magnetic media, such as a hard drive or backup tape set. The most common of these is the degaussing tool. Degaussing involves applying a strong magnetic field to initialize the media. (This is also referred to as disk wiping.) This process helps ensure that information doesn't fall into the wrong hands.
Degaussing involves using a specifically designed electromagnet to eliminate all data on the drive, including the factory‐prerecorded servo tracks. You can find wand model degaussers priced at just over $500 or desktop units that sell for up to $30,000.
A form of destruction not to be overlooked is fire. It is possible to destroy most devices by burning them up, using an accelerant such as gasoline or lighter fluid to aid the process.
A certificate of destruction (or certificate of recycling) may be required for audit purposes. Such a certificate, usually issued by the organization carrying out the destruction, is intended to verify that the asset was properly destroyed and usually includes serial numbers, type of destruction done, and so on.
The type and amount of information that can be gleaned from physical documents is amazing, even in the age when there is such a push to go paperless. Dumpster diving is a common problem that puts systems at risk. Companies normally generate a huge amount of paper, most of which eventually winds up in dumpsters or recycle bins. Dumpsters can contain highly sensitive information (such as a password a user has written on a piece of paper because they haven't memorized it yet).
In high‐security and government environments, sensitive papers should either be shredded or burned. Most businesses don't do this. In addition, the advent of “green” companies has created an increase in the amount of recycled paper, which can often contain all kinds of juicy information about a company and its individual employees.
In this chapter, you learned about the various issues related to security that appear on the A+ 220–1102 exam. Security is a popular topic in computing, and the ways in which a troublemaker can cause harm increase regularly. CompTIA expects everyone who is A+ certified to understand the basic principles of security and be familiar with solutions that exist.
You also learned of security problem areas and issues that can be easily identified. Problem areas include viruses, Trojans, worms, and malware. Security solutions include implementing encryption technology, using authentication, implementing firewalls, and incorporating security at many levels.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance‐based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
Calculate the complexity of a simple 8‐character alphanumeric password versus a 25‐character alphanumeric password with symbols.
THE FOLLOWING COMPTIA A+ 220‐1102 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
This chapter is the second of two chapters that focus primarily on security. Chapter 17, “Security Concepts,” covered myriad security concepts, ranging from physical security to the proper destruction of data storage devices in your organization. In this chapter, we will focus on operating system security and mobile security.
Many organizations are adopting a cloud‐first initiative for their line‐of‐business applications, which has further accelerated the adoption of mobile devices in the workplace. These initiatives, along with the rapid adoption of mobile devices in our personal lives, have created a tremendous need for security. This chapter will address the concerns of operating system security, mobile device security, and best practices.
Every operating system offers security features and settings. While you need to know a little about Linux and macOS, the A+ exams focus primarily on Windows and the OS‐specific security settings that you need to know to secure them. The following sections will explore some basic Windows OS security features and settings in more detail.
A number of groups are created on the operating system by default. The following sections look at the main ones.
During the initial setup of Windows, Microsoft urges the user to log in with a Microsoft account, as shown in Figure 18.1. You can set up a Microsoft account with an email, a phone, or a Skype login, which is actually a Microsoft account. This feature was originally introduced with Windows 8.
Setting up your initial user account with a Microsoft account is a smart idea, because it allows you to set up tools, such as OneDrive, to back up your files. The most significant feature is that it allows your settings to synchronize across all of your devices. It also gives you access to other Microsoft productivity tools, and ultimately it is how you access the Microsoft Store and its app ecosystem.
FIGURE 18.1 Microsoft account screen
You can also choose to select an offline account, as shown on the lower left of Figure 18.1. When you use an offline account, many of the features we described earlier need to be set up manually. Also, depending on the feature, it may not work as designed. When you elect to create an offline account, you are actually electing to use local accounts on the operating system. Many corporate‐owned devices still use local accounts and connect to a traditional Active Directory (AD) domain.
Choosing to sign in with a Microsoft account or local (offline) account depends on what you are trying to achieve. If the device will be used for personal work, then a Microsoft account is the best option. If the device will be used for an organization, then a local account might be the best option. A third option exists that is reserved for organizations, which is the use of a corporately owned email address for enrollment into mobile device management (MDM) software, such as Intune.
In the following sections, we will cover the basic local accounts and local permissions that you should expect to see on the CompTIA exam.
The Administrator account is the most powerful of all: it has the power to do everything from the smallest task all the way to removing the operating system. Because of the great power the Administrator account holds, and the fact that it is always created, many who want to do harm target this account as the one that they try to breach. To increase security, during the installation of the Windows operating systems in question, you are prompted for the name of a user who will be designated as the Administrator. The power then comes not from being called “Administrator” (the username might now be “buhagiar,” “jbuhagiar,” or something similar) but from being a member of the Administrators group. (Notice the plural for the group and singular for the user.)
Since members of the Administrators group have such power, they can inadvertently do harm (such as accidentally deleting a file that a regular user could not). To protect against this, the practice of logging in with an Administrators group account for daily interaction is strongly discouraged. Instead, we suggest that system administrators log in with a user account (lesser privileges) and change to the Administrators group account (elevated privileges) only when necessary.
Originally, Microsoft wanted to create a group in Windows whose members were not as powerful as members of the Administrators group, so they created the Power Users group. The idea was that members of this group would be given Read and Write permission to the system, allowing them to install most software but keeping them from changing key operating system files. As such, it would be a good group for those who need to test software (such as programmers) and junior administrators.
The group did not work out as planned, and in Windows 7, Windows 8/8.1, and Windows 10/11 the group has no more permissions than a standard user. The group is now kept around only for backward compatibility with Windows XP systems.
The Guest user account is created by default (and should be disabled) and is a member of the Guests group. For the most part, members of the Guests group have the same rights as a standard user, except they can't get to log files. The best reason to make users members of the Guests group is that they can access the system only for a limited time.
By default, standard users belong to the local Users group. Members of this group have Read and Write permission to their own profile. They cannot modify systemwide Registry settings or do much harm outside of their own accounts. Under the principle of least privilege, users should be made members of the Users group only, unless qualifying circumstances force them to have higher privileges.
If you attempt to run some utilities (such as sfc.exe) from a standard command prompt, you will be told that you must be an administrator running a console session in order to continue. If your account is in the Administrators group, then the command prompt must be launched with the elevated permissions of the administrator. To do so, choose Start ➢ All Programs ➢ Accessories, and then right‐click Command Prompt and choose Run As Administrator. User Account Control (UAC) will prompt you to continue, and then you can run sfc.exe without a problem.
You can change between account types of standard and administrator for your local user account or for other local user accounts by using the legacy Control Panel applet for User Accounts. You simply open Control Panel, select User Accounts, then click Change Your Account Type to change your account type or click Manage Another Account, select the user, then click Change The Account Type. You will be presented with the dialog box in Figure 18.2, where you can change the type of account from Administrator to Standard using the radio buttons.
FIGURE 18.2 Changing the account type
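The same change can be made from an elevated command prompt with the net commands, which is convenient for scripting. The username jbuhagiar below is just a placeholder for an existing local account.
net localgroup Administrators                       # list the current members of the local Administrators group
net localgroup Administrators jbuhagiar /add        # promote a standard user to administrator
net localgroup Administrators jbuhagiar /delete     # demote the account back to a standard user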
Users can log into the local operating system with their username and password, if they have an account, and they will receive a local access token. The access token the user is granted is locally significant for the operating system. For example, an administrator (local) who authenticates against the operating system is only an administrator of that operating system and has no further network permissions. Every Windows operating system has a local database and authentication system called the Security Account Manager (SAM), as shown in Figure 18.3.
FIGURE 18.3 Windows authentication
Active Directory simplifies the sign‐on process for users and lowers the support requirements for administrators. Access can be established through groups and enforced through group memberships: all users log into the Windows domain using their centrally created Active Directory account. It's important to enforce password changes and make certain that passwords are updated throughout the organization on a frequent basis.
Active Directory uses Kerberos v5. A server that runs Active Directory retains information about all access rights for all users and groups in the network. When a user logs into Active Directory, they are granted a network access token, also called a Kerberos token. This token can be used to authenticate against other servers and workstations in the domain and is accepted network (domain) wide. This token is also referred to as the user's globally unique identifier (GUID). Applications that support Active Directory for authentication can use this GUID to provide access control.
One of the big problems larger networks must deal with is the need for users to access multiple systems or applications, which may require users to remember multiple accounts and passwords. One alternative is to require every application to support Active Directory authentication, but that creates other considerations.
The purpose of single sign‐on (SSO) is to give users access to all the applications and systems that they need when they log in. Single sign‐on is often used with cloud‐based resources. The principle behind SSO is that the resource will trust that the user has already been authenticated. The authentication server performs this by sending a claim on behalf of the user, as shown in Figure 18.4. This claim can contain any number of Active Directory user attributes, such as first name, last name, email, and username, just to name a few. It is important to understand that at no time during the authentication process are the username and password sent to the resource that is requesting authentication. The resource must trust that the user has already been authenticated and accept the claim at face value.
FIGURE 18.4 Claims‐based authentication
Single sign‐on is both a blessing and a curse. It's a blessing in that once users have been authenticated, they can access all the resources on the network and browse multiple folders. Single sign‐on is a curse in that it removes the doors that otherwise exist between the user and various resources.
The problem with traditional usernames and passwords is that they are too complex for the user to consistently type. As administrators we want to make sure that users lock their workstation and type their password when they return to gain access. However, this is a burden on the user, especially if we are constantly changing passwords and making them more complex.
Windows Hello addresses these problems by storing the user's credentials in a secure container called the Credential Manager. The Credential Manager is then locked and unlocked with the authentication of biometrics or a PIN by the user. Once the Credential Manager is unlocked, the credentials can be passed to the operating system to provide the login credentials. Windows Hello can be configured by navigating to Start ➢ Settings App ➢ Accounts ➢ Sign‐in Options, and you will be brought to the various ways to configure Windows Hello, as shown in Figure 18.5.
FIGURE 18.5 Windows Hello Sign‐in options
There are several different sign‐in options for Windows Hello that allow for easier login to the device. The available options will depend on the hardware connected to the device. As an example, if you want to use your fingerprint, you will need a fingerprint reader. Several options are shown in Figure 18.5, but they all basically fall into a few categories, such as biometrics (facial or fingerprint recognition), a PIN, and a physical security key.
In addition to local logins, Windows Hello can be used to authenticate users for a Microsoft account, Active Directory account, and Azure AD account. It can even be used to authenticate users for identity provider services, which is another way of addressing SSO.
Administrators have rights in the operating system that allow them to change the operating system. Standard users do not have these rights and can only make changes to their own environment. Because administrative rights can cause harm to the operating system or introduce security issues, care should be taken in granting your end users administrative permissions. You should exercise the principle of least privilege when granting end‐user rights. This means that unless someone needs administrative privileges, they should always log in with a standard user account. This protects the local operating system and potentially the network (if domain authenticated) from the security threats covered in Chapter 17.
User Account Control (UAC) is a feature that was introduced in Windows Vista. It supports the principle of least privilege by logging an administrator in with minimal permissions. This is extremely handy for users who occasionally need administrative rights, such as on a home computer. For example, if extra privileges are required to modify the operating system, a prompt asking if the user wants to continue is displayed, as shown in Figure 18.6. If the user answers Yes, the user receives the administrative token to complete the task. UAC allows the user to run as a standard user with the ability to escalate privileges.
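A standard way to trigger that elevation from a script or a shortcut is the RunAs verb, shown here as a small PowerShell sketch; UAC still displays its consent prompt before the elevated process starts.
Start-Process powershell.exe -Verb RunAs                              # open an elevated PowerShell window
Start-Process cmd.exe -Verb RunAs -ArgumentList '/k sfc /scannow'     # run an administrator-only utility, such as sfc, in an elevated command prompt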
In Exercise 18.1 you will examine the security token for your user account. The exercise assumes that you are the administrator of the operating system.
FIGURE 18.6 UAC prompt
The New Technology File System (NTFS) was introduced with Windows NT to address security problems. Before Windows NT was released, it had become apparent to Microsoft that a new filesystem was needed to handle growing disk sizes, security concerns, and the need for more stability. NTFS was created to address those issues.
Although the File Allocation Table (FAT) filesystem was relatively stable if the systems that were controlling it kept running, it didn't do well when the power went out or the system crashed unexpectedly. One of the benefits of NTFS was a transaction‐tracking system, which made it possible for Windows NT to back out of any disk operations that were in progress when it crashed or lost power.
With NTFS, files, folders, and volumes can each have their own security. NTFS's security is flexible and built in. Not only does NTFS track security in ACLs, which can hold permissions for local users and groups, but each entry in the ACL can specify which type of access is given—such as Read & Execute, List Folder Contents, or Full Control. This allows a great deal of flexibility in setting up a network. In addition, special file‐encryption programs were developed to encrypt data while it is stored on the hard disk.
Microsoft strongly recommends that all network shares be established using NTFS. Several current operating systems from Microsoft support both FAT32 and NTFS. It's possible to convert from FAT32 to NTFS without losing data, but you can't do the operation in reverse. (You would need to reformat the drive and install the data again from a backup tape.)
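That one‐way conversion is performed with the built‐in convert command; the drive letter below is only an example, and taking a backup beforehand is still a good idea.
convert D: /FS:NTFS    # convert the FAT32 volume D: to NTFS in place (the operation cannot be reversed)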
Share permissions apply only when a user is accessing a file or folder through the network, as shown in Figure 18.7. NTFS permissions and attributes are used to protect the file when the user is local. With FAT and FAT32, you do not have the ability to assign “extended” or “extensible” permissions, and the user sitting at the console effectively is the owner of all resources on the system. As such, they can add, change, and delete any data or file.
With NTFS as the filesystem, you are allowed to assign more comprehensive security to your computer system, as shown in Table 18.1. NTFS permissions can protect you at the file level. Share permissions can be applied to the folder level only, as shown in Table 18.2. NTFS permissions can affect users accessing files and folders across a network or logged in locally to the system where the NTFS permissions are applied. Share permissions are in effect only when the user connects to the resource through the network.
FIGURE 18.7 Network share permissions and NTFS permissions
NTFS permission | Meaning | Object used on |
---|---|---|
Full Control | Gives the user all the other choices and the ability to change permissions. The user can also take ownership of the folder or any of its contents. | Folder and file objects |
Modify | Combines the Read & Execute permission with the Write permission and further allows the user to delete everything, including the folder. | Folder and file objects |
Read & Execute | Combines the Read permission with the List Folder Contents permission and adds the ability to run executables. | Folder and file objects |
List Folder Contents | The List Folder Contents permission (known simply as List in previous versions) allows the user to view the contents of a folder and to navigate to its subdirectories. It does not grant the user access to the files in these directories unless that is specified in file permissions. | Folder objects |
Read | Allows the user to navigate the entire folder structure, view the contents of the folder, view the contents of any files in the folder, and see ownership and attributes. | Folder and file objects |
Write | Allows the user to create new entities within a folder. | Folder and file objects |
TABLE 18.1 NTFS permissions
Share permission | Meaning |
---|---|
Full Control | Gives the user all the other permissions as well as permission to take ownership and change permissions. |
Change | Allows the user to overwrite, delete, and read files and folders. |
Read | Allows the user to view the contents of the file and to see ownership and attributes. |
TABLE 18.2 Share permissions
Within NTFS, permissions for objects fall into one of three categories: Allow, Deny, or not configured. When viewing the permissions for a file or folder, you can check the box for Allow, which effectively allows the group selected to perform that action. You can also uncheck the box for Allow, which does not allow that group that action, as shown in Figure 18.8. Alternatively, you can check the Deny box, which prevents that group from using that action. There is a difference between not allowing (a cleared check box) and Deny (which specifically prohibits), and you tend not to see Deny used often. Deny, when used, trumps other permissions.
FIGURE 18.8 NTFS folder permissions
Permissions set on a folder are inherited down through subfolders unless otherwise changed. Permissions are also cumulative; if a user is a member of a group that has Read permission and a member of a group that has Write permission, they effectively have both Read and Write permissions.
When a user accesses a file share, both the share permissions and NTFS permissions interact with each other to form the effective permission for the user. Figure 18.9 shows that a user named Fred has logged in and received his access token containing the Sales and R&D groups, since he is a member of both groups. When Fred accesses the Sales file share, the share permissions define that he has read‐only access because he is part of the Sales group. You can see that the NTFS permissions are granting him read and write access because of his Sales group membership, as well as full control because he is also in the R&D group. If Fred were to locally log in to this computer, he would effectively have full control of these files. However, because he is accessing these files from the network, he only has read‐only access because of the file‐share permissions. The opposite is also true: if he had full permission at the share level and read‐only permission at the NTFS level, he would effectively have read‐only access.
FIGURE 18.9 Effective permissions
The rule for figuring out effective permissions is simple: if a user is in more than one group for which there are multiple permissions, take the most permissive permission of NTFS and then the most permissive permission of the share; the effective permission is the more restrictive of the two. There are some circumstances that change this rule slightly when the user (or group) is denied. If a user is in any group that is denied permission at the share or the NTFS level, they are denied for that access level. Therefore, when you derive the more restrictive permission, it will always be a deny for the user. A simple way to remember this is that a deny is a deny.
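When you're troubleshooting effective permissions, it helps to look at each layer separately. A minimal sketch, assuming a share named Sales that maps to C:\Sales:
whoami /groups     # list the groups in the current user's access token
net share Sales    # display the share path and the share-level permissions
icacls C:\Sales    # display the NTFS permissions (ACL) on the underlying folder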
When you copy a file, you create a new entity. When you move a file, you simply relocate it and still have but one entity. This distinction is important when it comes to understanding permissions. A copy of a file will have the permissions assigned to it that are already in place at the new location of the file, regardless of which permissions were on the original file.
A moved file, on the other hand, will attempt to keep the same permissions as it had in the original location. Differences will occur if the same permissions cannot exist in the new location. For example, if you are moving a file from an NTFS volume to FAT32, the NTFS permissions will be lost. If, on the other hand, you are moving from a FAT32 volume to an NTFS volume, new permissions will be added that match those for newly created entities.
Since the introduction of FAT, file and folder attributes have existed in the filesystem. The basic set of attributes—Read‐only, Hidden, System, and Archive—can still be found in both FAT and NTFS and are very useful to the operating system. You can view a file or folder's Read‐only and Hidden attributes by right‐clicking the file or folder and selecting Properties. Files flagged with the Hidden attribute, for example, do not appear in a standard folder listing, such as the output of the dir command. The files still function and are still a part of the filesystem, but they just don't appear by default. The Hidden attribute is useful when you want to hide a file or folder from the average user.
Windows uses NTFS, which gives you a number of options that are not available on earlier filesystems, such as FAT and FAT32. A number of these options are implemented through the use of the Advanced Attributes dialog box, as shown in Figure 18.10.
FIGURE 18.10 The Advanced Attributes dialog box
To reach these options in Windows, right‐click the folder or file that you want to modify, and then select Properties from the menu. On the main Properties page of the folder or file, click the Advanced button in the lower‐right corner. In the Advanced Attributes window, you have access to the archive and indexing attributes, as well as options to compress the contents to save disk space or encrypt the contents to secure the data.
In Exercise 18.2 you will have the opportunity to view file permissions for both basic and advanced permissions. Make sure that you don't inadvertently add any deny permissions, as you could be prevented from making any further changes.
You can share folders, and the files beneath them, by right‐clicking the file or folder, choosing Give Access To from the context menu, and selecting Specific People. Windows then asks you to choose the people with whom you want to share the folder or file, along with their respective permission levels, as shown in Figure 18.11. It is important to understand that when you use this method to share files and folders, the share permissions are set to Full Control for the Everyone group; the dialog box shown in Figure 18.11 lets you manipulate only the NTFS permissions via the permission level.
You can access the Advanced Sharing settings by right‐clicking the folder you want to share, selecting Properties, then clicking the Sharing tab, and finally selecting Advanced Sharing, as shown in Figure 18.12. This file‐sharing method is more traditional with network administrators because every aspect of the share can be controlled. Using this method, only the share permissions are set from this dialog box. The NTFS security permissions are set on the Security tab. In addition, you can add other share names to the same location, limit the number of simultaneous connections, and add comments.
Administrative shares are automatically created on all Windows operating systems on the network for administrative purposes. These shares can differ slightly based on which operating system is running, but they always end with a dollar sign ($) to make them hidden. There is one share for each volume on a hard drive (c$, d$, and so on) as well as admin$ (the root folder—usually C:\WINDOWS) and print$ (where the print drivers are located). These shares are created for use by administrators and usually require administrator privileges to access.
FIGURE 18.11 Choose People To Share With
FIGURE 18.12 Advanced file and folder sharing
Local shares, as the name implies, are shares that are created locally by the administrative user on the operating system. The term local shares is used to distinguish between automated administrative shares and manually created shares.
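Both kinds of shares can be viewed and created from an elevated command prompt. The share name and folder path below are examples only.
net share                                                 # list all shares, including hidden administrative shares such as C$ and ADMIN$
net share SalesDocs=C:\SalesDocs /GRANT:Everyone,READ     # create a local share with read-only share permissions
net share SalesDocs /delete                               # remove the share without touching the folder or its NTFS permissions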
Inheritance is the default throughout the permission structure unless a specific setting is created to override it. A user who has Read and Write permissions in one folder will have them in all the subfolders unless a change has been made specifically to one of the subfolders. If a user has the Write permission inherited from the folder above, that permission cannot be removed unless inheritance is disabled; only additional permissions can be added explicitly, since adding is a new permissions entry rather than the removal of an existing one. You can control NTFS inheritance by right‐clicking a folder, selecting Properties, then choosing the Security tab, selecting Advanced, and clicking the Disable Inheritance button, as shown in Figure 18.13.
FIGURE 18.13 Disabling inheritance
If the Disable Inheritance button is selected, you will be changing the NTFS security settings on the folder. A second dialog box will pop up and you must choose to either keep the existing permissions, by selecting Convert Inherited Permissions To Explicit Permissions On This Object, or to start fresh with no permissions by selecting Remove All Inherited Permissions From This Object. You should use caution when selecting the latter of the two options, because all folders and files below will inherit and propagate the removal of existing permissions.
If you want to make sure that inheritance and permissions for a folder are propagated to all files and folders below, you can use Replace All Child Object Permission Entries With Inheritable Permission Entries From This Object (refer to Figure 18.13). This option will replace every permission in this folder and all the subfolders, regardless of whether explicit permissions were applied further down in the folder structure.
In the Advanced Security Settings, you can also configure permissions entries that only apply to the current folder, current folder and files, all folders and files, or other variations of these, as shown in Figure 18.14. These settings can change the propagation of file permissions to folders and files.
FIGURE 18.14 Permission entry
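The same inheritance controls are available from the command line through icacls, which can be handy for scripting. The folder path below is a placeholder.
icacls C:\Data                    # view the current ACL; entries marked (I) are inherited
icacls C:\Data /inheritance:d     # disable inheritance and convert inherited entries to explicit entries
icacls C:\Data /inheritance:r     # disable inheritance and remove all inherited entries (use with caution)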
System files are usually flagged with the Hidden attribute, meaning they don't appear when a user displays a folder listing. You should not change this attribute on a system file unless absolutely necessary. System files are required in order for the operating system to function. If they are visible, users might delete them (perhaps thinking that they can clear some disk space by deleting files that they don't recognize). Needless to say, that would be a bad thing! Most system files and folders are protected by the operating system and won't allow deletion, but better safe than sorry.
File attributes determine what specific users can do to files or folders. For example, if a file or folder is flagged with the Read‐only attribute, then users can read the file or folder but cannot make changes to it or delete it. Attributes include Read‐only, Hidden, System, and Archive, as well as Compression, Indexing, and Encryption. Not all attributes are available with all versions of Windows. We'll look at this subject in more detail in a moment.
You can view and change file attributes either by entering attrib at the command prompt or by changing the properties of a file or folder. To access the properties of a file or folder, right‐click the file or folder and select Properties. You can view and configure the Read‐only and Hidden file attributes on the General tab. To view and configure additional attributes, click Advanced, as shown in Figure 18.15.
FIGURE 18.15 Windows file attributes
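For reference, a few typical attrib operations are sketched below; the filename is just an example.
attrib report.docx          # display the current attributes (R, H, S, A)
attrib +r +h report.docx    # set the Read-only and Hidden attributes
attrib -h report.docx       # clear the Hidden attribute so the file appears in listings again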
Although everything we've covered so far is a Windows security feature, we have focused on the basic elements of authentication and securing resources. In the following section we will cover the common Windows security features that can protect us from external threats, such as malware, malicious network activity, and data loss.
Microsoft Defender, also known as Windows Defender Antivirus, was originally introduced for Windows XP as a downloadable antispyware product. It was later shipped with the Vista operating system, and it has become a pillar of security in the Windows 10/11 operating system. The addition of Microsoft Defender allows the Windows operating system to be protected right out of the box; a user never needs to install anything else to be protected from malware. That being said, there are plenty of products on the market that provide security features above and beyond Microsoft Defender.
FIGURE 18.16 Microsoft Defender settings
You can view the Microsoft Defender settings by navigating to Start ➢ Settings App ➢ Update & Security ➢ Windows Security ➢ Virus & Threat Protection in Windows 10. In Windows 11, you can view the settings by navigating to Start ➢ Settings App ➢ Privacy & Security ➢ Windows Security ➢ Virus & Threat Protection, as shown in Figure 18.16.
On the Virus & Threat Protection screen you can make a scan of the computer. In addition, the screen will detail how many threats have been found, when it was last scanned, how many files were scanned, and how long the scan lasted. By clicking Scan Options you can select from Quick Scan, Full Scan, Custom Scan, and Microsoft Defender Offline Scan.
You can also change the Microsoft Defender Virus & Threat Protection Settings by clicking Manage Settings on the Virus & Threat Protection screen. This will allow you to manage a number of settings to change the way Microsoft Defender operates, as shown in Figure 18.17.
FIGURE 18.17 Microsoft Defender Virus & Threat settings
You can toggle off real‐time protection when installing certain applications that require that antivirus be off during installation. However, the real‐time protection will turn back on automatically after a period of time. You can also toggle Cloud‐Delivered Protection, which provides cloud‐based data on threats and ultimately faster protection. Turning this setting off might be required for certain regulatory requirements, since it automatically turns on cloud‐based sample submission. Automatic Sample Submission can be controlled separately as well and toggled on and off. The Tamper Protection security setting prevents malicious applications from tampering with Microsoft Defender settings. Tamper Protection protects against tampering from third‐party processes; even Group Policy settings cannot disable Microsoft Defender when Tamper Protection is turned on.
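Most of these settings can also be inspected and scripted with the Defender PowerShell module from an elevated session. This is only a sketch; Tamper Protection itself cannot be toggled this way, and the preference change below is merely an illustration.
Get-MpComputerStatus                                  # report real-time protection state, signature versions, and last scan times
Start-MpScan -ScanType QuickScan                      # start a quick scan
Set-MpPreference -DisableRealtimeMonitoring $true     # temporarily turn off real-time protection (Tamper Protection may block this)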
Windows Defender Firewall is an advanced host‐based firewall that was first introduced with Windows XP Service Pack 2. It was integrated and became a security feature with the introduction of Windows Vista. While host‐based firewalls are not as secure as other types of firewalls, Windows Defender Firewall provides much better protection than in previous versions of Windows, and it is turned on by default. Windows Defender Firewall is used to block access from the network, which significantly reduces the surface area of attack for the Windows operating system.
To access Windows Defender Firewall in Windows 10, navigate to Start ➢ Settings Apps ➢ Update & Security ➢ Windows Security ➢ Firewall & Network Protection. To access Windows Defender Firewall in Windows 11, navigate to Start ➢ Settings Apps ➢ Privacy & Security ➢ Windows Security ➢ Firewall & Network Protection. Windows Defender Firewall is divided into separate profile settings: for domain networks (if you're connected to a domain), private networks, and public networks. In Figure 18.18, you can see the default protection for a Windows client that is not joined to a domain and is active on a public network.
FIGURE 18.18 Firewall & Network Protection
FIGURE 18.19 Deactivating Windows Defender Firewall network protection
FIGURE 18.20 Windows Defender Firewall Allowed Apps
FIGURE 18.21 Windows Firewall with Advanced Security
FIGURE 18.22 Windows Defender Firewall with Advanced Security inbound rules
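The same profiles and rules can be managed with PowerShell, which is useful for documenting or automating firewall changes. The rule below is purely an example.
Get-NetFirewallProfile | Select-Object Name, Enabled    # confirm the Domain, Private, and Public profiles are enabled
New-NetFirewallRule -DisplayName "Block Telnet" -Direction Inbound -Protocol TCP -LocalPort 23 -Action Block
Set-NetFirewallProfile -Profile Public -Enabled True    # re-enable a profile that was turned off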
You have to be careful, because CompTIA sometimes refers to the utility as “bit‐locker” or “Bitlocker,” while it is officially known as BitLocker. This tool allows you to use drive encryption to protect files—including those needed for startup and login. This is available only with more complete editions of Windows 10/11 (Pro, Enterprise, Education, Pro for Workstations), Windows 8/8.1 (Pro and Enterprise), and Windows 7 (Enterprise and Ultimate).
Another requirement is the use of a Trusted Platform Module (TPM). The TPM is a chip on the motherboard that safely stores the encryption key so that the key is not stored on the encrypted disk. BitLocker can mitigate the risk of data loss because, if the disk is separated from the computer, it is still encrypted and only the TPM holding the encryption/decryption key can decrypt the disk. This prevents out‐of‐band attacks on the hard drive, where it would be mounted and examined on a second system. BitLocker can also sense tampering; if it senses tampering, the recovery key must be reentered. The recovery key is either entered from a printout, loaded from a USB drive on which it was originally saved, or recovered from your Microsoft account. The option of how the recovery key is stored is presented to you when you initially turn on BitLocker.
You can also protect removable drives with BitLocker to Go. It provides the same encryption technology BitLocker uses to help prevent unauthorized access to the files stored on them. You can turn on BitLocker to Go by inserting a USB drive into the computer and opening the BitLocker Drive Encryption Control Panel applet, as shown in Figure 18.23. When a USB drive is inserted into a Windows computer that contains BitLocker to Go encryption, the operating system prompts you for the password to unlock the drive. This password is the one you used originally when you set up BitLocker to Go on the USB drive.
FIGURE 18.23 BitLocker Drive Encryption applet
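BitLocker can also be checked and enabled from an elevated prompt. The sketch below assumes the operating system volume is C: and a removable drive is E:; review the protector options for your environment before relying on it.
manage-bde -status                                                                          # show encryption status and key protectors for every volume
Enable-BitLocker -MountPoint "C:" -EncryptionMethod XtsAes256 -RecoveryPasswordProtector    # encrypt the OS volume with a recovery password protector
Enable-BitLocker -MountPoint "E:" -PasswordProtector -Password (Read-Host -AsSecureString)  # BitLocker To Go on a USB drive, unlocked by a password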
Encrypting File System (EFS), available in most editions of Windows, allows for the encryption/decryption of files stored in NTFS volumes. EFS uses certificates to encrypt the data, and the private certificate is stored in the user profile. When the first file is encrypted, the operating system automatically generates a key pair. If the computer were joined to an Active Directory domain and a certificate authority (CA) existed, the CA would create the key pair. You can encrypt a file or folder by right‐clicking the object, selecting Properties, then Advanced, as shown in Figure 18.24.
All users can use EFS, whereas only administrators can turn on BitLocker. EFS does not require any special hardware, whereas BitLocker benefits from having the TPM. As an additional distinction, EFS can encrypt just one file, if so desired, whereas BitLocker encrypts the whole volume and whatever is stored on it. Finally, EFS can be used in conjunction with BitLocker to further increase security.
FIGURE 18.24 Encrypting a file in Windows 10
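EFS can also be driven from the command line with cipher; the folder and filename below are examples only.
cipher C:\Reports                        # list the encryption state of files in the folder (E = encrypted, U = unencrypted)
cipher /e /a C:\Reports\budget.xlsx      # encrypt a single file
cipher /d /a C:\Reports\budget.xlsx      # decrypt it again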
The web browser is arguably the most used application on the Windows operating system. It is also your portal to the Internet and all the bad things that reside outside of your network. Therefore, it makes sense that there is an entire objective dedicated to web browser security for the CompTIA exam. In this section we will explore various browser security topics. We will primarily focus on the Microsoft Edge browser that comes preinstalled in the Windows 10/11 operating system. However, we will also reference Chrome, as it holds more than 60 percent of the market share worldwide (https://gs.statcounter.com) as of this writing.
Edge is the successor to Internet Explorer and comes preinstalled in the Microsoft Windows 10/11 operating system. Edge is also the default web browser in the Windows 10/11 operating system. Although Edge comes preinstalled in Windows 10/11, there are circumstances when you need to download the Edge web browser and install it in other operating systems. A common scenario is downloading and installing Edge for a server operating system in which it is not preinstalled, such as Windows Server 2019 or another operating system, such as macOS or Linux.
To download the Edge installer, you can often just search for “Edge download.” Depending on the search engine you use, the genuine result may appear in the top three results, but malware can also be disguised as the download. Therefore, you need to ensure your download is supplied by a trusted source. You should only download the browser from the vendor, in this case Microsoft. A simple way to check that the source is the vendor is to check the URL in the address bar, as shown in Figure 18.25. If the address bar shows the parent address of Microsoft.com, then you are obviously downloading Edge from a trusted source.
FIGURE 18.25 Downloading Microsoft Edge
There are two main ways to download most browsers: online and offline. The online version initially downloads a small install application (approximately 2 MB) that then downloads and installs the rest of the browser. The online version typically can be downloaded from the Microsoft Edge download page, as previously described. An offline installation can be downloaded from Microsoft as well and offers the benefit of not requiring any Internet connectivity during installation. An offline version can be downloaded from www.microsoft.com/en-us/edge/business/download. By downloading the offline version from the Microsoft trusted source, you can be assured that future installations from this installer are genuine.
When you store the installation files for future use, you should protect the installer from being tampered with. The best preventive measure is to create a Secure Hash Algorithm (SHA) hash of the executable and store it in another location. Before you run the offline installer, simply hash the installer again and compare the result to the stored value. If the hashes match, the installer has not been tampered with. If they do not match, tampering could have occurred, and the installer should be treated as being from an untrusted source.
In the following exercise, you will get to use the Get‐FileHash cmdlet built into PowerShell. If you already have a file hash and it is not in the SHA256 format, you can use the argument ‐Algorithm MD5 or ‐Algorithm SHA1, depending on the format you need to verify.
In Exercise 18.3 you will create a hash for a sample file with a PowerShell cmdlet. Then you will slightly modify the file and hash it again and compare the outcomes.
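A minimal sketch of the comparison looks like the following; the installer filename and the file holding the stored hash are placeholders for your own values.
$stored = Get-Content .\edge_installer.sha256                            # the hash you saved when you first downloaded the installer
$current = (Get-FileHash .\MicrosoftEdgeSetup.exe -Algorithm SHA256).Hash
$current -eq $stored                                                     # True means the installer has not changed since you hashed it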
Google Chrome and other third‐party browsers require download and installation from the vendor sites, just like Microsoft Edge. These practices can and should be employed with any installation, not just web browsers. You should always download installations from a trusted source, then protect them with file hashing to detect tampering.
The best feature in any web browser is the ability to extend its functionality to accommodate your exact needs. By using extensions and plug‐ins, you can functionally change the way the web browser works and the way web pages are consequently rendered. Extensions add functionality to the web browser that was not originally conceived when it was designed. Plug‐ins change the way web pages are rendered by the web browser. Each web browser calls these something slightly different; for example, plug‐ins might be called add‐ons, as in the case of Firefox. Google Chrome refers to both extensions and plug‐ins as extensions. Microsoft refers to both extensions and plug‐ins as add‐ons. For the remainder of this section, we will refer to all plug‐ins and extensions as add‐ons.
Most modern web browsers have an ecosystem of add‐ons. Google Chrome has an ecosystem called the Chrome Web Store that can be accessed via https://chrome.google.com/webstore. Microsoft has an ecosystem called Edge Add‐ons that can be accessed via https://microsoftedge.microsoft.com, as shown in Figure 18.26. An add‐on ecosystem is a place where the vendor trusts the publisher of an add‐on. The add‐on ecosystem in turn distributes the add‐on for users and supplies updates and synchronized installations across multiple devices. The add‐on ecosystem should always be considered a trusted source for web browser add‐ons.
FIGURE 18.26 Microsoft Edge add‐ons
Most web browsers also allow manual installation of add‐ons. This is called sideloading, since you are manually installing the file outside of the ecosystem. The web browser usually allows this for development purposes. Sideloading is also common with untrusted add‐ons that the ecosystem doesn't allow. The add‐on might look reputable, and the web page might explain that manual installation is required because the developers don't agree to the terms of service (ToS) of the ecosystem. In any case, you should consider these add‐ons untrusted and therefore avoid them.
As you sign up for websites, they require more complex, lengthier passwords. Sometimes your username might be your favorite nickname, sometimes the nickname might not be available, and sometimes the site might require an email in lieu of a nickname‐style username. You should also employ the practice of single‐purpose passwords and never reuse a password. So, if your credentials on site A are compromised, then your credentials on site B will not be vulnerable.
No longer can we keep track of all these usernames and passwords. Luckily password managers have come to our rescue. They are built into every operating system and most web browsers. Microsoft Edge and Internet Explorer both use the Microsoft Credential Manager, which was originally introduced with Windows XP. It functions as a password manager and is built into the operating system. In Figure 18.27, you can see some web credentials stored by Microsoft Edge. You can access the Credential Manager by navigating to the Start menu, typing Control Panel and selecting it in the results, then clicking Credential Manager.
FIGURE 18.27 Microsoft Credential Manager
Credentials are stored by successfully logging into a website with a username and password combination. The web browser will ask if you want to save the credentials. Once the credentials are stored, when a website asks for a username and password matching the site in the Credential Manager, the associated credentials are offered to the user for logging into the site. If you are in the Credential Manager and you want to see the password, click Show and enter your credentials for the currently logged‐on user. By entering your credentials for the currently logged‐on user, you unlock the Credential Manager and it will let you see the password. You also have the option of deleting the credentials by clicking Remove.
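You can also review the same store from the command line with cmdkey; the target name below is hypothetical.
cmdkey /list                    # list the credentials stored for the currently logged-on user
cmdkey /delete:fileserver01     # remove a stored credential for a target you no longer trust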
Other web browsers, such as Google Chrome and Mozilla Firefox, have a credential manager built in. These web browsers do not use the Microsoft Credential Manager; instead, they synchronize their usernames and passwords between installations of the web browser on various devices. You can also download and install a third‐party stand‐alone credential manager. KeePass is one example, and there are several freely available on the Internet.
Securing data transfers to and from the web browser is critical for the security of day‐to‐day operations. We use our web browsers for accessing sensitive information at work, doing our personal banking, accessing social media, and the list goes on. So, securing data in transit is a primary concern to manage the risk of eavesdropping from threat agents. The Hypertext Transfer Protocol over Secure Sockets Layer (SSL), also known as HTTPS, is a method for securing web data communications. SSL is a cryptographic suite of protocols that use public key infrastructure (PKI) to provide secure data transfer.
A PKI is a system that provides the key pair of private and public keys, also known as certificates. The certificates are used to validate communication and encrypt communications between web browsers and web servers. The private key (certificate) from the key pair is installed on the web server. The public key (certificate) is available to anyone who wants to validate the data encrypted with the private key that is installed on the server. The web browser will automatically download the public key for the cryptography process.
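To see this exchange in action, the following Python sketch uses the standard library's ssl module to open an HTTPS connection and print details of the certificate the server presents. The hostname www.example.com is used only as an illustration; the default SSL context loads the operating system's trusted root CAs, which ties into the trust discussion that follows.

import socket
import ssl

# Create a default context: it loads the operating system's trusted root CAs
# and enables hostname checking and certificate validation.
context = ssl.create_default_context()

hostname = "www.example.com"  # illustrative host
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()          # the server's public certificate
        print("TLS version:", tls.version())
        print("Issued to:  ", dict(x[0] for x in cert["subject"]))
        print("Issued by:  ", dict(x[0] for x in cert["issuer"]))
        print("Expires:    ", cert["notAfter"])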
Because there is a level of trust involved with the public/private key pair, your web browser must initially trust the publisher of the key pair. The publisher of the key pair is known as the certificate authority (CA). Every web browser comes with an initial list of trusted root certificate authorities, which forms the root of trust for the issuing CAs. The issuing CAs are the CAs that actually issue the key pairs. In most cases the web browser will use the operating system's list of trusted root CAs, as shown in Figure 18.28. However, some web browsers maintain their own list of trusted root CAs.
Each certificate has an expiration date, and from time to time certificates expire, get replaced, or in some cases get revoked for various reasons. Therefore, the list of trusted root certificate authorities must be updated now and then. The Microsoft Windows platform does this through routine Windows Updates, while browsers that maintain their own lists refresh them through their own update mechanisms.
FIGURE 18.28 Trusted root CAs
A lesson on web browser security wouldn't be complete without covering security settings for the web browser. In this final section, we will explore the various security settings you will find on modern‐day web browsers. We discuss the most common topics that you will see on the CompTIA exam, but in no way is this a complete list of security settings. The web browser is the most used application for accessing data, and new settings are introduced all the time to combat threat agents.
Pop‐up blockers are used to prevent both pop‐ups and pop‐unders from appearing when you visit a web page. While older browsers did not incorporate an option to block pop‐ups, most current browsers, including the latest versions of Edge and Chrome, have that capability built in.
By default, the pop‐up blocker is enabled on all sites globally. However, some sites, such as banking websites, might need to pop up a web page. When this happens, the address bar will notify you that an attempted pop‐up occurred. You can then click and allow the specific site to use pop‐ups. If you want to view a site's permission for pop‐ups, click the lock at the left of the address bar and choose Permissions For This Site, as shown in Figure 18.29.
FIGURE 18.29 Permissions for a site
Browsing data is a broad term that describes any data stored while visiting websites. It includes browsing history, download history, cookies, image and file caches, passwords, autofill form data, and site settings, and these are just the top categories. Browsing history is used by the browser to highlight links you've already visited. Download history lets you see what has been downloaded in the past. Cookies save settings used by the web page, such as themes and login data, to name a few. Passwords are saved for convenience when logging into websites, as is autofill form data. Site settings are adjustments you've made to the browser to optimally render a given web page.
There are various reasons you may want to clear browsing data. The most compelling reason is data privacy. However, it is sometimes necessary to clear browsing data when you are trying to replicate a problem seen with the web browser.
You can clear the browsing data by clicking the three dots in the upper‐right corner of the Edge web browser. Then click Settings, choose Privacy, then Search And Services, and scroll down to Clear Browsing Data Now and select Choose What To Clear. You will be presented with a dialog box similar to Figure 18.30. Options for the time range are Last Hour, Last 24 Hours, Last 7 Days, Last 4 Weeks, and All Time. You can also selectively delete the web browsing data that you desire.
FIGURE 18.30 Clearing browsing data
When a web browser renders a web page, the files retrieved are cached. This is done so that if you need them again you can quickly retrieve them from storage. This caching mechanism speeds up the web browser and reduces unneeded trips to the Internet. There are times when you need to clear your web browser cache, such as when developing a web page; you will want to retrieve the latest copy of the web page and its assets so that you can verify how it is rendered. The cached images and files are part of the web browsing data that can be cleared. The process is similar to the previously mentioned process for clearing browsing data, except only the cached images and files are cleared.
Private‐browsing mode was created primarily to address data privacy while you are web browsing. The most compelling feature of private‐browsing mode is that it does not store any web browsing data. Therefore, when you close private‐browsing mode, all browsing data is destroyed.
The private‐browsing mode on the Edge web browser is called InPrivate browsing; Chrome calls it Incognito mode. On the Edge web browser you can enter InPrivate mode by clicking the three dots in the upper‐right corner of the window and then selecting New InPrivate Window. Another way is to press Ctrl+Shift+N to open the InPrivate window. In either case, the window explains the mode, as shown in Figure 18.31. The methods to open a private‐browsing window in Chrome are identical, except that it is called Incognito mode.
FIGURE 18.31 InPrivate browsing
On average, we access our web browsers on several devices. You may access them on your home computer, personal laptop, and mobile device, just to name a few. Therefore, the browsing data should follow you, no matter which device you are using. Modern web browsers fortunately support browser data synchronization, which allows you to share your data across many different devices. To sync your device, you just need to make sure that you have signed in and acknowledged that you want to sync browsing data. You can verify this by clicking on your account in the upper‐right corner of the web browser. Your account details will be displayed, as shown in Figure 18.32. If you are not syncing, you can click the link Turn On Sync.
FIGURE 18.32 Account details
Use caution before syncing your personal web browsing data to your work computer. You should keep a strict separation between work and leisure. During our leisure time we might search for something that could be misconstrued by our coworkers. Therefore, you should maintain different accounts for work and play and never sync your personal account to a work web browser.
The Internet was originally embraced as a mechanism to share information and bring people together. However, it was equally embraced by commerce as a mechanism to deliver products and services. One such service is the marketing of other services and products; we get these as marketing ads.
You can install an ad blocker add‐on for most web browsers to stop a lot of the spammy ads you might receive on a web page. However, entire websites, such as many news sites, subsidize their income with marketing ads. If you don't have a subscription, then you have to allow marketing ads in order to enjoy their content; it seems a pretty fair trade. Most of these sites provide directions on how to exempt their site from the ad blocker.
You may wonder why you should install an ad blocker at all if you end up exempting sites anyway. Ad blockers are extremely useful as an added layer of insulation from threat actors. Although most sites are legitimate and serve relevant ads, there are plenty of sites that serve malicious ads. An ad blocker blocks all ads by default and allows you to judge whether a site is worthy of an exemption.
CompTIA wants administrators of small office, home office (SOHO) networks to be able to secure those networks in ways that protect the data stored on them. This objective looks at the security protection that can be added to a wireless SOHO network, while the following section examines similar procedures for a wired network.
A wireless network is not and never will be secure. Use wireless only when absolutely necessary. If you must deploy a wireless network, here are some tips to make improvements to wireless security:
In addition to those created with the installation of the operating system(s), default accounts are also often associated with hardware. Wireless access points, routers, and similar devices often include accounts for interacting with, and administering, those devices. You should always change the passwords associated with those devices and, where possible, change the usernames.
If there are accounts that are not needed, disable or delete them. Make certain that you use strong password policies and protect the passwords with the same security that you use for users and administrators. (In other words, don't write the router's password on an address label and stick it to the bottom of the router.)
All radio frequency (RF) signals can be easily intercepted. To intercept 802.11 wireless traffic, all you need is a PC with an appropriate 802.11 card installed. Many networks regularly broadcast their names (known as an SSID broadcast) to announce their presence. Simple software on the PC can capture the link traffic to and from the wireless AP and then process this data in an attempt to decrypt account and password information.
You should change the SSID—whether or not you choose to disable its broadcast—to keep it from being a value that many outsiders come to know. If you use the same SSID for years, then the number of individuals who have left the company or otherwise learned of its value will only increase. Changing the variable adds one more level of security.
Most guests in your network never need to connect to the organization's servers and internal systems. When guests connect to your wireless network, it is usually just to get connectivity to the Internet. Therefore, a guest service set identifier (SSID) should be created that isolates guest traffic from production traffic. These guest network SSIDs are usually created by default on consumer wireless devices. On enterprise wireless LAN controllers, the guest network typically needs to be created.
Some considerations for the guest network are what is open to guests, how long they have access, how much bandwidth they get, the SSID name, and so on, depending on your organization. Guest networks usually don't give totally unrestricted Internet access; certain sensitive ports, such as TCP port 25 (SMTP), are normally blocked. The length of time guests have access is another concern. Generally, a guest is just that, a guest, so 4, 8, or 24 hours of access seems reasonable. You should give this a lot of thought, because too short a window creates administrative overhead and too long a window allows for abuse of the service. If you don't expect guest access to your wireless network, then the guest network should be disabled.
It's important to remember that you should always enable encryption for any wireless network that you administer. Choose the strongest level of encryption you can work with. The following are some wireless protocols that you might encounter when securing wireless:
Wi‐Fi Protected Access Wi‐Fi Protected Access (WPA) was standardized by the Wi‐Fi Alliance in 2003 in response to the vulnerabilities in Wired Equivalent Privacy (WEP). WPA uses 256‐bit keys versus the 64‐bit and 128‐bit keys WEP used previously. WPA operates in two modes for security: preshared key (PSK), also called personal mode, and enterprise mode. PSK is the most common mode, because it can easily be implemented with a mutual agreed‐upon passphrase. Enterprise mode, also called WPA‐802.1X, requires a certificate server infrastructure. Enterprise mode uses the 802.1X protocol, RADIUS, and EAP; it is often used in corporate environments.
WPA introduced many improved security features over WEP, such as message integrity checks (MICs), which detect packets altered in transit. WPA also introduced Temporal Key Integrity Protocol (TKIP), which uses the RC4 algorithm for encryption. TKIP provides per‐packet keying to prevent eavesdropping on wireless conversations. However, despite the improvements in security, WPA is considered exploitable and is no longer used for wireless security. A common exploit used against WPA is an attack on the helper protocol of Wi‐Fi Protected Setup (WPS). WPS is used for consumer ease of setup and should be turned off for security purposes.
Wi‐Fi Protected Access 2 (WPA2) WPA2, also known as 802.11i, is the successor to WPA. WPA was deprecated in 2006 when WPA2 became a wireless security standard. Just like WPA, WPA2 operates in both personal mode (PSK) and enterprise mode.
WPA2 uses the Advanced Encryption Standard (AES) algorithm to protect data. AES is more secure than the RC4 algorithm used with TKIP. WPA2 replaced TKIP with Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP). However, TKIP can be configured as a fallback for WPA backward compatibility. Like WPA, WPA2 is exploitable if the WPS service is enabled. WPS should be turned off for security purposes.
One method of “protecting” the network that is often recommended is to turn off the SSID broadcast. The access point is still there and can still be accessed by those who know about it, but it prevents those who are looking at a list of available networks from finding it. This should be considered a very weak form of security because there are still ways, albeit a bit more complicated, to discover the presence of the access point besides the SSID broadcast.
Most APs offer the ability to turn on MAC filtering, but it is off by default. In the default state, any wireless client that knows of the existence of the AP can join the network. When MAC filtering is used, the administrator compiles a list of the MAC addresses associated with the users' computers and enters them. When a client attempts to connect, an additional check of the MAC address is performed. If the address appears on the list, the client is allowed to join; otherwise, they are forbidden from so doing. On a number of wireless devices, the term network lock is used in place of MAC filtering, but the two terms are synonymous.
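Conceptually, MAC filtering is nothing more than an allow‐list lookup performed before a client is admitted. The following Python sketch shows the idea; the MAC addresses are hypothetical, and keep in mind that MAC addresses can be spoofed, so this is a weak control on its own.

# Hypothetical allow list compiled by the administrator.
ALLOWED_MACS = {
    "aa:bb:cc:dd:ee:01",
    "aa:bb:cc:dd:ee:02",
}

def may_join(client_mac: str) -> bool:
    """Return True if the client's MAC address appears on the allow list."""
    return client_mac.lower() in ALLOWED_MACS

print(may_join("AA:BB:CC:DD:EE:01"))  # True  -- on the list
print(may_join("aa:bb:cc:dd:ee:99"))  # False -- refused to join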
The frequencies used with wireless local area networks (WLANs) vary by standard. The two main frequencies used are 2.4 GHz and 5 GHz. The 2.4 GHz frequencies are governed by the industrial, scientific, and medical (ISM) radio bands. The 5 GHz frequencies are governed by the Unlicensed National Information Infrastructure (U‐NII) radio band. It is important to note that in the future, 6 GHz frequencies will be used with the second release of 802.11ax called Wi‐Fi 6E.
The 2.4 GHz spectrum is governed by the ISM radio band. The 802.11b/g/n standards operate on 2.4 GHz frequencies. The band consists of 14 channels, each 22 MHz wide. In North America only the first 11 of the channels can be used for wireless. In Japan all 14 channels can be used, and almost everywhere else in the world the first 13 channels can be used. Only 3 of the 14 channels are considered nonoverlapping, as seen in Figure 18.33. Channels 1, 6, and 11 are considered prime channels for WLAN use because they do not overlap with one another in the channel plan.
FIGURE 18.33 The 2.4 GHz channel plan
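The nonoverlapping claim can be checked with a little arithmetic: channels 1 through 13 are spaced 5 MHz apart starting at 2412 MHz, while each channel is roughly 22 MHz wide. The short Python sketch below applies that spacing (channel 14, used only in Japan, sits at 2484 MHz) to show why channels 1, 6, and 11 clear each other while adjacent channels collide.

# Center frequency (MHz) of 2.4 GHz channels 1-13: 2412 MHz plus 5 MHz per channel.
# Channel 14 (Japan only) sits at 2484 MHz. Each channel is roughly 22 MHz wide.
def center(channel: int) -> int:
    return 2484 if channel == 14 else 2412 + 5 * (channel - 1)

def overlaps(a: int, b: int, width: int = 22) -> bool:
    """Two channels overlap when their centers are closer than one channel width."""
    return abs(center(a) - center(b)) < width

print(overlaps(1, 6))  # False -- 2437 - 2412 = 25 MHz apart
print(overlaps(1, 3))  # True  -- 2422 - 2412 = 10 MHz apart
# [1, 6, 11] -- the three channels that clear every other member of the trio.
print([c for c in (1, 6, 11) if not any(overlaps(c, o) for o in (1, 6, 11) if o != c)])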
The 5 GHz frequencies are governed by the Unlicensed National Information Infrastructure (U‐NII) radio band. The 802.11 a/n/ac/ax standards operate on the 5 GHz frequency spectrum.
As seen in Figure 18.34, the band consists of 25 nonoverlapping channels. In North America the 802.11a standard can function on 12 channels consisting of 36, 40, 44, 48, 52, 56, 60, 64, 149, 153, 157, and 161. Each regulatory domain restricts the number of channels and specific channels for the region.
In North America, the 802.11ac standard can use 25 of the nonoverlapping channels. In Europe and Japan, the channels are limited to the U‐NII 1, U‐NII 2, and U‐NII 2E list of channels. The 802.11n standard only allowed the first 24 channels in North America, because channel 165 is in the ISM band.
FIGURE 18.34 The 5 GHz channel plan
Speeds will always be higher on 5 GHz wireless, such as the 802.11 a/n/ac standards. The 802.11ac standard can use a maximum of 25 nonoverlapping channels. However, the 802.11 b/g/n standards are restricted to the 2.4 GHz wireless band with three nonoverlapping channels. You should always be aware of the airspace around your AP. If you are overlapping on a channel, you should make every effort to change the channel on your AP. By changing the channel, you will effectively increase the speed of the connection for all your users within the cell.
Lower frequencies travel farther, but as you move farther from the AP your speed will suffer. Higher frequencies tend to travel shorter distances, and as you move away from the AP, speed declines sharply. Using higher frequencies allows you to lower power and decrease the chances of signals traveling into a public place.
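The relationship between frequency and range can be quantified with the free‐space path loss formula, FSPL(dB) = 20log10(d) + 20log10(f) + 32.44, with distance in kilometers and frequency in MHz. The Python sketch below compares a 2.4 GHz channel with a 5 GHz channel at the same distance; it models only free‐space attenuation, not walls or interference, and the channel frequencies are just illustrative.

import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_mhz) + 32.44

for freq in (2412, 5180):  # channel 1 (2.4 GHz) vs. channel 36 (5 GHz)
    print(f"{freq} MHz at 30 m: {fspl_db(30, freq):.1f} dB of free-space loss")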
Consider the radio power level, since the wireless access point has better transmitting power than most mobile devices. Let's use the analogy of two people standing in a field. One person is using a bullhorn to ask the other person the time, and the other person only has their voice to respond. Although they can hear the request, they will not be heard when they answer because they don't have a bullhorn. To fix this problem, the wireless access point should have its power level adjusted so that the client needs to be closer to receive data or associate with the SSID.
From a security standpoint, power levels should be adjusted so that they do not travel past the interior of the organization's building. If they do, then someone sitting in the parking lot or an adjoining building could attempt to infiltrate the wireless network. On the chance that the signal is actually traveling too far, some access points include power level controls that allow you to reduce the amount of output provided.
Antenna placement can be crucial in allowing clients to reach the access point. For security reasons, you do not want to overextend the reach of the network so that people can get on to the network from other locations (the parking lot, the building next door, and so on). Balancing security and access is a tricky thing to do.
There isn't any one universal solution to this issue—it depends on the environment in which the access point is placed. As a general rule, the greater the distance the signal must travel, the more it will attenuate; however, you can lose a signal quickly in a short space as well if the building materials reflect or absorb it. You should try to avoid placing access points near metal (which includes appliances) or near the ground. They should be placed in the center of the area to be served and high enough to get around most obstacles.
Wireless site surveys can be performed with specialized software that records the strength of the signal on a map you import of the coverage area. Wireless surveys should be performed in your environment before access point placement to help you determine the best placement. They should also be performed after access point placement for fine‐tuning of the location and verification of uniform signal strength.
While DHCP can be a godsend, a SOHO network is small enough that you can get by without it issuing IP addresses to each host. The advantage to assigning the IP addresses statically is that you can make certain which host is associated with which IP address and then utilize filtering to limit network access to only those hosts.
WPS (Wi‐Fi Protected Setup) can help to secure the network by requiring new machines to do something before they can join the network. This often requires the user to perform an action in order to complete the enrollment process: press a button on the router within a short time period, enter a PIN, or bring the new device nearby (so that near‐field communication can take place). It should be noted that during this brief period the wireless access point is susceptible to attack, and anyone with the passcode or the ability to guess it can gain access to your wireless network. Therefore, WPS is not used outside of SOHO networks and is not found in corporate networks.
When setting up a wireless network, you are extending a wired network to a wireless network. Therefore, you must consider how users will authenticate to the wireless network. The following are various concepts of authentication that can be used with wireless and wired networks (though they are more commonly used with wireless networks):
Something you do
Setting up a wireless network based on a preshared key limits you to a single authentication factor, which everyone consequently shares. The use of a preshared key is a great example of an authentication factor of something that you know. Unfortunately, a preshared key can be compromised because people must share it, and you only have one preshared key per wireless SSID.
RADIUS Remote Authentication Dial‐In User Service (RADIUS) was originally proposed as an Internet Engineering Task Force (IETF) standard. It has become a widely adopted industry standard for authenticating users and computers for network systems. RADIUS creates a common authentication system, which allows for centralized authentication and accounting.
The origins of RADIUS are from the original ISP dial‐up days, as its acronym describes. Today, RADIUS is commonly used for authentication of virtual private networks (VPNs), wireless systems, and any network system that requires a common authentication system. RADIUS operates as a client‐server protocol. The RADIUS server controls authentication, authorization, and accounting (AAA). The RADIUS client can be wireless access points, a VPN, or wired switches. The RADIUS client will communicate with the RADIUS server via UDP port 1812 for authentication and UDP port 1813 for accounting.
The RADIUS server can be installed on many different operating systems, such as Linux and Windows. Microsoft Windows Server includes an installable feature, called the Network Policy Server (NPS), that provides RADIUS functionality.
Although a wired network can be more secure than a wireless one, there are still a number of procedures that you should follow to leave as little to chance as possible. Among them, change the default usernames and passwords to different values and secure the physical environment. You should also disable any ports that are not needed, assign static IP addresses, use IP filtering, and use MAC filtering to limit access to hosts that you recognize.
When installing a network device, the very first thing you must do is log in to the device. There is often a standardized default username and password for each vendor or each vendor's product line. Most devices make you change the default password upon login to the device.
Changing the default password to a complex password is a good start to hardening the device. However, changing the username will also ensure that a brute‐force attack cannot be performed against the default username. There are many different websites dedicated to listing the default credentials for network devices, so it doesn't take tremendous skill to obtain the default username and password of a device.
The hosts in the network are no exception to changing default usernames and passwords. In Windows, the Guest account is automatically created with the intent that it is to be used when someone must access a system but lacks a user account on that system. Because the Guest account is so widely known to exist, you should not use this default account but instead create another account for the same purpose if you truly need one. The Guest account leaves a security risk at the workstation and should be disabled to deter anyone attempting to gain unauthorized access.
When you purchase a network device, you don't know how long it's been sitting on the shelf of a warehouse. In that time, several exploits could have been created for vulnerabilities discovered. It is always recommended that a device's firmware be upgraded before the device is configured and put into service.
Most hardware vendors will allow downloading of current firmware. However, some vendors require that the device be covered under a maintenance contract before firmware can be downloaded. It is also best practice to read through a vendor's change log to understand the changes that have been made from version to version of firmware.
IP filtering, also known as firewall rules, helps secure the internal network from an external network. The external network could be the Internet, or it could be a network less trusted than the internal network, such as a wireless network. In any case, firewall rules help harden the security of an organization because we can restrict activity from the external network to specific applications.
Firewall rules are normally configured with an implicit deny at the end of the rules set. This means that if an application has not explicitly been allowed, it will automatically (implicitly) be denied. The easy way to remember what implicit means is that it implies there is a deny, unless a user or application has been explicitly allowed. This implicit deny operation of a firewall is the default for firewall rule sets.
Although there is an implicit deny at the end of firewall rule sets, you may also need to explicitly deny an application or IP address. An explicit deny is required when another explicit rule follows, allowing access to a wide range of applications or IP addresses. For example, you may want to allow access to all the servers from the client networks. However, you should explicitly deny applications the clients shouldn't have access to, such as Remote Desktop Protocol (RDP) or Secure Shell (SSH).
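A firewall's rule evaluation can be modeled as a first‐match walk down the rule list with a deny as the fallback. The Python sketch below is a simplified model, not any vendor's actual rule syntax; the networks and ports are hypothetical, but it shows explicit deny rules taking effect before a broad allow, and the implicit deny catching everything that matches no rule.

from ipaddress import ip_address, ip_network

# Hypothetical rule set: first match wins, and anything that matches no rule
# is implicitly denied.
RULES = [
    ("deny",  ip_network("10.1.0.0/16"), 3389),  # explicit deny: RDP from clients
    ("deny",  ip_network("10.1.0.0/16"), 22),    # explicit deny: SSH from clients
    ("allow", ip_network("10.1.0.0/16"), None),  # allow clients to everything else
]

def evaluate(source: str, port: int) -> str:
    src = ip_address(source)
    for action, network, rule_port in RULES:
        if src in network and (rule_port is None or rule_port == port):
            return action
    return "deny"  # implicit deny at the end of the rule set

print(evaluate("10.1.4.20", 443))   # allow -- matched by the broad allow rule
print(evaluate("10.1.4.20", 3389))  # deny  -- explicit deny hit first
print(evaluate("192.168.9.9", 80))  # deny  -- no rule matched: implicit deny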
Disable all protocols on the network device that are not required. As an example, many network multifunction printers are preconfigured with a multitude of protocols, such as TCP/IP, Bonjour, and Internet Printing Protocol (IPP), just to name a few. If you don't need them, remove the additional protocols, software, or services, or prevent them (disable them) from loading. Ports on a switch, router, or firewall not in use present an open door for an attacker to enter and should be disabled or disconnected.
Limiting access to the network by employing IP filtering is not the only way to restrict access. MAC address filtering can also be employed to restrict traffic to MAC addresses that are known and to filter out those that are not. Even in a home network, you can implement MAC filtering with most routers; you typically have the option either to allow only the computers with MAC addresses that you list or to deny only the computers with MAC addresses that you list.
Content filters are useful in networks to restrict users from viewing material that is non–work‐related, questionable, or malicious. Content filtering is usually dictated by organization policy and management. The content filter operates by watching content and requests from web browsers and other applications. The content filter functions in two ways. The first is content based: when images and text are requested from a website, the content filter can use heuristic rules to filter the content according to administrator‐set policies. The second method is URL based, which is much more common since many websites now use SSL/TLS (encryption) and the traffic is encrypted. Content filters are typically purchased with a subscription that provides updates to the categories of material administrators block. Content filters can be hardware solutions or software solutions, although it is common to find them installed as software solutions.
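URL‐based filtering boils down to looking up the requested host in a category database and applying policy. The following Python sketch models that decision; the category database and blocked categories are hypothetical placeholders for the subscription feed and organization policy mentioned above.

from urllib.parse import urlparse

# Hypothetical category database, normally delivered by subscription updates.
URL_CATEGORIES = {
    "gambling.example": "gambling",
    "news.example": "news",
}
BLOCKED_CATEGORIES = {"gambling", "social-media"}  # set by organization policy

def allowed(url: str) -> bool:
    """URL-based filtering: look up the host's category and apply policy."""
    host = urlparse(url).hostname
    category = URL_CATEGORIES.get(host, "uncategorized")
    return category not in BLOCKED_CATEGORIES

print(allowed("https://news.example/story"))      # True
print(allowed("https://gambling.example/slots"))  # False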
The screened subnet is also known as the demilitarized zone (DMZ). The DMZ gets its name from the segmentation that is created between the exterior and the interior of the network. This is similar to where borders of two opposing countries meet with military presence on both sides. Between the two sides there is a neutral segment called the DMZ. As it pertains to a network, hosts that serve Internet clients are placed in the DMZ subnet. As shown in Figure 18.35, a network segment called the DMZ sits between an external firewall and the internal firewall. The external firewall contains ACLs to prevent Internet hosts from accessing nonessential services on the server in the DMZ. The internal firewall restricts which hosts can talk to internal servers. A typical rule on the external firewall would allow HTTP access for a web server in the DMZ and would restrict all other ports. A typical rule on the internal firewall would allow only the web server to communicate with the SQL backend database in the internal network.
FIGURE 18.35 A typical DMZ with two firewalls
Although the concept of the DMZ is still used today in network design, a screened subnet can be created between any two segments in the network. The subnets don't necessarily need to be external and internal in relation to the network. Routers containing ACLs can be implemented in lieu of firewalls to filter traffic to the screened subnet, as shown in Figure 18.36. In the figure, a network called Network A is segmented from the screened subnet by a router with ACLs filtering traffic. On the other side of the screened subnet is another network called Network B, and it too is segmented by a router with ACLs filtering traffic. Each of these two networks has equal access to the hosts in the screened subnet. These two networks, Network A and Network B, could potentially be a wireless network and the wired network, respectively.
FIGURE 18.36 A typical screened subnet with two routers
Some screened subnets are just another interface on a single firewall, as shown in Figure 18.37. In this example, the rules for both the Network A subnet and the Network B subnet would be on the same firewall. The benefit of a single firewall is centralized administration of firewall rules. Each interface is placed into a trust zone, and the firewall rules allow incoming and outgoing connections.
FIGURE 18.37 A typical screened subnet with one firewall
On the router, the port configuration dictates what traffic is allowed to flow through. The router can be configured to enable individual port traffic in, out, or both; when you implement this type of configuration, it is referred to as port forwarding. If a port is blocked (such as port 80 for HTTP or port 21 for FTP), the data will not be allowed through, and users will be affected. Port forwarding is also known as port mapping. Both are subsets of what a firewall does, and they require about the same amount of tweaking to get right.
Port forwarding is required when you are trying to share a network service with the Internet, such as a web server. It is commonly required when gaming, since some games require you to set up an impromptu game server. The players on the Internet will have to connect to the ports on your local machine behind your router, through port forwarding. There are websites dedicated to configuring these port forwarding settings manually; an example is https://portforward.com.
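Under the hood, port forwarding simply relays traffic arriving on one port to a host and port on the internal network. The Python sketch below is a toy TCP relay, assuming a hypothetical internal web server at 192.168.1.50; a real router performs this translation (usually with NAT) in its firmware rather than in user code, so treat this strictly as an illustration of the data flow.

import socket
import threading

# Hypothetical mapping: connections arriving on the router's public port 8080
# are relayed to a web server at 192.168.1.50:80 on the internal network.
LISTEN_PORT = 8080
TARGET = ("192.168.1.50", 80)

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the connection closes, then close the far side."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def forward() -> None:
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", LISTEN_PORT))
    listener.listen()
    while True:
        client, _ = listener.accept()
        server = socket.create_connection(TARGET)
        # Relay both directions so the external client and the internal
        # server can talk as if they were directly connected.
        threading.Thread(target=pipe, args=(client, server), daemon=True).start()
        threading.Thread(target=pipe, args=(server, client), daemon=True).start()

if __name__ == "__main__":
    forward()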
If you need to configure port forwarding frequently, then you might want to skip manual port forwarding altogether. Universal Plug and Play (UPnP) is a network protocol that allows for automatic configuration of port forwarding. Most modern‐day routers have the feature turned on, which is a security concern if you are not using any port forwarding. The UPnP protocol operates by having the client initiate a connection to the router and communicate which ports need to be forwarded to it. The router then opens those ports and forwards them to the client.
Dynamic Host Configuration Protocol (DHCP) is responsible for automatic configuration of IPv4 addresses and subnet masks for hosts from a pool of IPv4 addresses. It is also responsible for configuration of options such as default gateways, DNS server addresses, and many other IP‐based settings. It performs configuration of the host in a series of network broadcasts and unicasts.
When a client requests an IP address from a DHCP server, the client's MAC address is transmitted in the DHCP packet. A rule on the DHCP server called a DHCP reservation can tie the client's MAC address to a particular IP address. When a reservation is created for a client, the client is guaranteed to obtain the same IP address every time it goes through the DHCP process. Once the reservation exists on the DHCP server, no other host can obtain the reserved IP address unless it has the MAC address that matches the reservation. This type of assignment is considered a dynamically static–assigned IP address.
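The reservation logic itself is a simple lookup keyed on the client's MAC address before the general pool is consulted. The Python sketch below models that decision; the MAC addresses and address ranges are hypothetical, and a real DHCP server also tracks leases, lease times, and conflicts.

import random

# Hypothetical reservations: MAC address -> reserved IPv4 address.
RESERVATIONS = {
    "00:11:22:33:44:55": "192.168.1.50",   # network printer
    "66:77:88:99:aa:bb": "192.168.1.51",   # badge reader
}
POOL = [f"192.168.1.{host}" for host in range(100, 200)]

def offer_address(client_mac: str) -> str:
    """Hand out the reserved address if one exists, otherwise lease from the pool."""
    mac = client_mac.lower()
    if mac in RESERVATIONS:
        return RESERVATIONS[mac]  # the "dynamically static" assignment
    return random.choice([ip for ip in POOL if ip not in RESERVATIONS.values()])

print(offer_address("00:11:22:33:44:55"))  # always 192.168.1.50
print(offer_address("de:ad:be:ef:00:01"))  # some address from the general pool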
Reservations can be very handy when static IP addresses are too troublesome to configure, such as network printers with poor configuration options. It's common to set a reservation on network printers and move on when faced with a finicky static IP address process. You can save an hour of busy work in the right situation. Reservations can also be useful when you need to make specific firewall rules for a client based on its IP address.
Dynamic IP addressing is the standard in small‐to‐large networks when configuring client computers. Static IP addressing should only be used under certain circumstances for client computers, since it is not very scalable and a nightmare to keep track of manually. DHCP allows for central management of the IP address space versus static assignment of individual hosts (which is decentralized). Static IP addressing should only be used on internal network resources such as routers, network printers, and servers.
Static IP addressing can be useful for wide area network (WAN) connections, also known as your connection to the Internet. If a server is operating at the location, a static IP address is necessary for clients to be able to connect. Name resolution to the IP address is the biggest driver for static IP addressing. There are work‐arounds, such as dynamic DNS services, but the best solution is to purchase a static IP address from the Internet provider.
Just as you would not park your car in a public garage and leave its doors wide open with the key in the ignition, you should educate users to not leave a workstation that they are logged into when they attend meetings, go to lunch, and so forth. They should log out of the workstation or lock it. “Lock when you leave” should be a mantra they become familiar with. A password (usually the same as their user password) should be required to resume working at the workstation.
You can also lock a workstation by using an operating system that provides filesystem security. Microsoft's earliest filesystem was referred to as File Allocation Table (FAT). FAT was designed for relatively small disk drives. It was upgraded first to FAT‐16 and finally to FAT‐32. FAT‐32 (also written as FAT32) allows large disk systems to be used on Windows systems.
FAT allows only two types of protection: share‐level and user‐level access privileges. If a user has write or change access to a drive or directory, they have access to any file in that directory. This is very insecure in an Internet environment.
With NTFS, files, directories, and volumes can each have their own security. NTFS's security is flexible and built in. Not only does NTFS track security in access control lists (ACLs), which can hold permissions for local users and groups, but each entry in the ACL can also specify which type of access is given. This allows a great deal of flexibility in setting up a network. It's advisable to use BitLocker to encrypt the device's storage whenever possible. In addition, special file‐encryption programs can be used to encrypt data while it is stored on removable hard disk.
Microsoft strongly recommends that all network shares be established using NTFS. While NTFS security is important, however, it doesn't matter at all which filesystem you are using if you log into your workstation and then leave, allowing anyone to sit at your desk and use your account. If the computer is in your home office and it's unlocked, you may come back to find your cat is lying on the Delete key. So, the rule of thumb is lock your operating system when you leave the keyboard.
Last, don't overlook the obvious need for physical security. Adding a cable to lock a laptop to a desk prevents someone from picking it up and walking away with a copy of your customer database. Every laptop case we are aware of includes a built‐in security slot in which a cable lock can be added to prevent it from easily being carried off the premises. If this is in your home office, it might deter a burglar from walking away with sensitive information.
When it comes to desktop models, adding a lock to the back cover can prevent an intruder with physical access from grabbing the hard drive or damaging the internal components. You should also physically secure network devices, such as routers, access points, and the like. Place them in locked cabinets, if possible—if they are not physically secured, the opportunity exists for an unauthorized person to steal them or manipulate them to connect to the network.
Apple computers have a pretty decent reputation in the industry for being somewhat resistant to malware. Whether this is because of the relatively small installed base or the ease with which hackers penetrate “other” operating systems, the characteristic carries over to Apple's mobile devices. In fact, hackers don't seem to be as interested in attacking the legions of mobile devices as they have been in going after the Windows operating systems that drive the vast majority of laptops, desktops, and servers in the world. Nevertheless, attacks occur. Coupled with how easy mobile devices are to misplace or steal, it behooves users to have proactive monitoring and contingency plans in place.
The following sections detail the built‐in security utilities that are common in today's mobile devices. Furthermore, for threats not covered by the software with which the devices ship, the protection available from third‐party utilities is worth discussing.
Apple and Android mobile devices include a requisite locking mechanism, which is off by default. The user on the go is encouraged to enable a lock. If your device acts primarily as a home computing device and rarely goes with you out the door, there is very little reason to set a lock. However, knowing how to do so is important. The following are types of locks that you can implement to secure your device:
Exercise 18.4 outlines the steps for creating a code for your iPhone.
The same general concept for Android phones is illustrated in Exercise 18.5.
Should your work or personal mobile device disappear or fall into the wrong hands, it's always nice to have a backup plan to ensure that no company secrets or personal identifiers get misused by anyone who would use the information with ill will. Apple supplies a free app called Find My iPhone that, together with iCloud, allows multiple mobile devices and Macs to be located if powered on and connected to the Internet (via cellular, Wi‐Fi, Ethernet, and so on). The app allows the device to be controlled remotely to lock it, play a sound (even if audio is off), display a message, or wipe it clean.
Within a newer iPhone's Settings screen, you can find an iCloud settings page and select the Find My iPhone switch. If this switch is off, the Find My iPhone app and iCloud web page will be unable to find your device.
On the login screen for the iPhone app, you must enter the iCloud account information that includes the device you are attempting to control remotely.
Note that when you change the password for your Apple ID, it affects your iCloud account but does not transfer automatically within your device. Always remember to update your iCloud account information on each device when you update the associated Apple ID.
The iCloud website's login page (www.icloud.com) calls for the same credentials as the app. You are signing in with HTTPS, so your username and password are not traversing the Internet in the clear. With the switch in the iCloud settings screen set to Off for all devices on your account, when you sign in to the app with your iCloud account credentials, you are met with a disabling switch message.
You do not need to go to the website if you have another device with the Find My iPhone app or can borrow one from someone else. The device forgets your credentials when you log out, so the owner will not be able to control your device the next time they use the app.
After logging into the iCloud website, you can click the icon that matches the icon for the Find My iPhone app in iOS. Assuming that you've made it into the app on another device and your Find My iPhone feature is enabled on your missing device, the Info screen tells you that your device has been found and gives you options for the next step you take.
Tapping the Location button in the upper left shows you a map of where your device is currently located. You have three options for how to view the location: a two‐dimensional map, a satellite map, and a hybrid version, where the two‐dimensional street‐name information is laid over the satellite view.
If you tap the Play Sound or Send Message button on the Info screen, instead of the Location button, the screen that pops up allows you to display a message remotely. You might consider first displaying a message without the sound, which is at maximum volume. Ask in the message to be called at another number. If you hear from someone in possession of your device, the hunt is over. Otherwise, send another message with the tone to get the attention of the nearest person. If you are at the reported location when you generate the sound, it can help you home in on the device.
If you do decide to use the remote‐lock feature of the app, you'll have the opportunity to reconsider before locking it. You should have no issue with locking the device. Doing so does not prevent you from using the app further; it simply makes sure that the device is harder to break into.
Should you decide to take the sobering step of destroying the contents of the device remotely, you get a solemn notice allowing you the opportunity to reassess the situation before proceeding. Sometimes there's just no other option.
For Android devices, Google's Find My Device app performs many of the same functions as Apple's Find My iPhone app, including playing your ring tone for 5 minutes. When you search Google's web page for “find my device,” a map will display the location of your device. Clicking the location will reveal the SSID the phone is connected to, the phone's battery level, and when the phone was last in use at that location. You can then choose to play a sound (ring tone), secure the device, or erase the device. Securing the device will lock the phone and sign you out of your Google account. This option can also allow you to display a message on the lock screen. Erasing the device is the last step when you know it cannot be recovered and you want to be assured your data is removed.
Apple iOS devices automatically back themselves up either to a computer running iTunes that they sync with or to the iCloud account associated with the device. When a mobile device is connected to a computer containing iTunes, the process of backup is called synchronization. iCloud Backup is the most common method of backing up Apple iOS devices, since no computer is necessary. iCloud does require a Wi‐Fi connection or cell service. To enable iCloud Backup, navigate to Settings, then tap your name, tap iCloud, and finally tap iCloud Backup. You can create an immediate backup by tapping Back Up Now, or you can wait until the next backup interval.
The Android operating system will automatically synchronize the device to Google Drive. Android phones require a Google account during setup. This account is the account that will synchronize to Google Drive. Google Drive is Google's cloud‐based storage product. The backup service will back up Wi‐Fi passwords, phone logs, app settings, contacts, messages, pictures, and other related files. A multitude of third‐party backup apps can be downloaded from Google's Play Store. Each third‐party app offers different features over and above the built‐in backup service functionality, such as the capability to remove the bloatware apps that come with the phone.
After you've set a screen lock, an optional step is to set the device to wipe, or factory reset, after a number of failed attempts. This option will wipe local data on the device if incorrect passcodes are entered 10 times in a row, or perform a factory reset depending on the device. While this is recommended for users with devices that contain sensitive data and that are frequently taken into public venues or placed in compromising positions, the casual user should not turn this feature on unless they can be sure that a recent backup will always be available in iTunes or Google.
Imagine a user's child or a mischievous, yet harmless, friend poking away at passcodes until the device informs them that it is being wiped clean; it's not for everyone. Restoring from a backup is easy enough, but will a recent backup be available when disaster strikes? Apple performs a backup to the iCloud or the computer running iTunes that the iOS device syncs with.
Apple imposes cooling‐off timeout periods of increasing duration, even if the Erase Data feature is disabled and you or someone else repeatedly enters the wrong code over multiple lockouts. The final penalty with the Erase Data feature disabled is that you cannot unlock the device until it is connected to the computer with which it was last synced.
When a passcode is set, Android devices have a similar approach to failed login attempts as their Apple counterparts. A factory reset will occur after 10 failed attempts.
The difference is that if waiting the timeout period won't help because you've forgotten the pattern or code, this device can tie your access back to the Google account you used when setting it up. This is also the account where you receive purchase notifications from Google Play, and it does not have to be a Gmail account (one of the benefits of the open‐source nature of Android). If you remember how to log into that account, you can still get into your phone. Or at least you can investigate the credentials to that account on a standard computer and return to the device to enter those credentials.
For the most part, mobile devices have been left alone by viruses and malware, as compared with the Windows platform. However, this is not a reason to let your guard down and stop worrying about viruses and malware on mobile devices. Mobile devices can contract a virus or malware through the installation of a malicious app.
Most malware installed on mobile devices will spam your device with ads from the Internet. However, some malware can expose your personal information and even subject your device to the control of a malicious individual. Viruses, on the other hand, will use your email application to send copies of themselves or turn your device into a zombie.
The extent of the damage from malware or a virus is hard to estimate, but one thing is for sure: protection goes a long way. Antivirus and antimalware software should be installed on your mobile device to thwart malicious attempts to infiltrate your device. All the leading vendors, such as AVG, Norton, and Avast (just to name a few), have offerings of antivirus and antimalware. Most installations of these apps are free, and other services or features can be purchased within the apps.
You can limit your exposure to malicious apps by only installing apps from trusted sources, such as Apple's App Store or Google Play. In most cases, the mobile operating system must be specifically configured to accept installations from untrusted sources, so it's not likely that you will mistakenly install an app from an untrusted source. Examples of untrusted sources include manual installs of Android APK (Android Package Kit) or Apple IPA (iOS App Store Package) files. These files can be distributed outside of the Google Play and Apple App Store ecosystems; when they are, consider them untrusted.
It's easy to forget that these tiny yet powerful mobile devices we've been talking about are running operating systems that play the same role as the operating systems of their larger siblings. As such, users must be careful not to let the operating systems go too long without updates. Occasionally, mobile devices will notify the user of an important update to the operating system. Too often, however, these notifications never come. Therefore, users should develop the habit of checking for updates on a regular basis.
Not keeping up with software updates creates an environment of known weaknesses and unfixed bugs. Mobile devices operate on a very tight tolerance of hardware and software performance. Not maintaining the device for performance at the top of its game will tend to have more pronounced repercussions than those seen in larger systems.
For the iPhone, iPod Touch, and iPad, you can check for the most important level of updates by tapping Settings ➢ General ➢ Software Update. For the Android operating system, there are multiple updates that can be checked for manually. All of them are accessible by following Menu ➢ Settings ➢ System Updates.
As discussed previously, BitLocker and BitLocker to Go greatly enhance security by encrypting the data on drives (installed and removable, respectively) and helping to secure it from prying eyes. At a minimum, the same level of protection that you would apply to a desktop machine should be applied to a mobile device, because it can contain confidential and personally identifiable information (PII), which could cause great harm in the wrong hands.
Full‐device encryption should be done on laptops and mobile devices, and you should back up regularly to be able to access a version of your files should something happen to the device itself. When full‐device encryption is turned on, both the device and the external storage (SD card) are encrypted. The only way to view the information on the SD card is from the encrypting mobile device.
Multifactor authentication, mentioned previously in this chapter, involves using more than one item (factor) to authenticate. An example of this would be configuring a BitLocker to Go–encrypted flash drive so that when it is inserted into your laptop, a password and smartcard value must be given before the data is decrypted and available.
An authenticator app works with mobile devices to generate security codes that can keep accounts secure by requiring two‐factor authentication (2FA). Once this is set up, your account will require a code from the app in addition to your account password. Authenticator applications are available for download with Apple, Android, and other mobile device operating systems, as well as desktop operating systems. An account is usually added to the authenticator application by entering a secret key or scanning a QR barcode; this creates the account in the authenticator application. Several different authenticator apps can be downloaded; they vary depending on the mobile operating system.
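The codes an authenticator app displays are time‐based one‐time passwords (TOTP, RFC 6238): an HMAC of the current 30‐second time step computed with the shared secret from the QR code. The Python sketch below implements the standard algorithm using only the standard library; the Base32 secret shown is a made‐up example, not a real account key.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, SHA-1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # current time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Hypothetical secret key -- the value normally delivered via the QR code.
print(totp("JBSWY3DPEHPK3PXP"))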
Mobile devices have the same inherent vulnerabilities as desktop operating systems. However, mobile devices are generally not targeted the same as desktop operating systems. When a mobile device starts up, it is configured with an IP address via either the internal cellular radio or the Wi‐Fi radio, so the device can be exploited via the network. Therefore, a firewall should be installed and turned on to protect the mobile device.
However, inbound communication is not the only aspect of security you need to be concerned about on mobile devices. Outbound communication also is a primary concern. A mobile device firewall app will allow you to monitor both the inbound and outbound communications on your mobile device. Several third‐party firewall apps (both free and for purchase) can be downloaded from either Apple's App Store or Google Play.
With the explosive growth of mobile devices in the workplace, there are many different policies and procedures that may be required for your organization to minimize data loss. This section focuses on the policies and procedures specific to mobile devices that you may have in your organization.
As employees are hired into your organization, a certain amount of initial interaction with the information technology (IT) department is required. This interaction, called the onboarding procedure, is often coordinated with the Human Resources (HR) department. During the onboarding procedure, the IT representative will help the user log in for the first time and set their password. The password policy is often the first policy discussed with the user. Other policies, such as bring your own device (BYOD), acceptable use policy (AUP), nondisclosure agreement (NDA), and information assurance, should also be discussed during the onboarding procedure. Policies on the use of email and file storage should also be covered with the user during the onboarding process. Each organization will have a different set of criteria that make up its onboarding procedures.
Eventually, employees will leave your organization. The offboarding procedure ensures that information access is terminated when the employment of the user is terminated. The offboarding procedure will be initiated by the HR department and should be immediately performed by the IT department. This process can be automated via the organization's employee‐management system. This procedure can also be performed manually if the employee‐management system is not automated. However, the procedure must be performed promptly, since the access to the company's information systems is the responsibility of the IT department.
During the offboarding procedure, email access is removed via the mobile device management (MDM) software, the user account is disabled, and IT should make sure that the user is not connected to the IT systems remotely. The offboarding procedure may also specify that the user assume ownership of the terminated employee's voicemail, email, and files.
The traditional workforce is becoming a mobile workforce, with employees working from home, on the go, and in the office. These employees use laptops, tablets, and smartphones to connect to their companies' cloud resources. Organizations have embraced BYOD initiatives as a strategy to alleviate the capital expense of equipment by allowing employees to use devices they already own.
Because employees are supplying their own devices, a formal document called the BYOD policy should be drafted. The BYOD policy defines a set of minimum requirements for the devices, such as size and type, operating system, connectivity, antivirus, patches, and many other requirements the organization will deem necessary, as well as the level of service your IT department will support for personally owned equipment.
Many organizations use MDM software, which helps enforce the requirements for the BYOD policy. MDM software helps organizations protect their data on devices that are personally owned by the employees. When employees are terminated or a device is lost, the MDM software allows a secure remote wipe of the company's data on the device. The MDM software can also set policies requiring passwords on the device. All of these requirements should be defined in the organization's BYOD policy.
Corporate‐owned mobile devices are also of paramount concern when it comes to security. The equipment is mobile, so these devices sometimes travel and can disappear completely. Luckily, MDM software allows you to not only control the data on these devices but, in many instances, also track them via a built‐in Global Positioning System (GPS) sensor (if the device supports this functionality). When MDM software is used to track assets, consideration must be given to privacy. Although it may be acceptable to track a corporate‐owned mobile device, the end user must be made aware of this policy.
When you implement an MDM solution to manage mobile devices, part of your implementation is the creation of profile security requirements for the mobile devices you will manage. The profile security requirements allow the mobile devices to be managed in a uniform fashion. As an administrator, you can choose settings for mobile devices under your purview and enforce profile security requirements in various ways. In one scenario you may want to enforce settings for the entire organization, whereas in another you may want to vary the settings by organizational unit, role, or other group type. Among the settings you may want to enforce are those requiring the encryption of drives and the use of complex passwords.
The Internet of Things (IoT) is an exploding industry. You can now control everything from your lights to your thermostat with a mobile device or computer. The downside to connecting things to the Internet is that they must be patched so that they are not exploitable. Hardware vendors often build off‐the‐shelf components into their products, and those components rarely receive updates. This makes IoT a major security consideration for an organization. Ease of use versus security is the balance beam that you walk when owning IoT devices.
In recent years, attackers have harvested IoT devices for distributed denial‐of‐service (DDoS) attacks. The Mirai botnet is one well‐known example that can crush an organization's bandwidth, either when IoT devices in your network are recruited into a DDoS or when your organization is attacked by the botnet itself. To avoid inadvertently taking part in a DDoS, you can place IoT devices on an isolated network and police their outbound bandwidth.
Unfortunately, there is not much you can do on the IoT devices themselves to prevent your organization from being attacked. If an attacker targets your organization with an IoT botnet, firewalls, ISP controls, and third‐party services such as Cloudflare can help mitigate the attack. Strictly speaking, this is not an IoT‐specific consideration, because any botnet can attack you and the mitigations are the same.
In this chapter, you learned some best practices related to operating system security. We then focused on Windows operating system security settings, such as sensitive accounts and filesystem permissions. We concluded with mobile device security for both Apple devices and Google Android. The mobile device operating systems are changing rapidly, as both consumer and enterprise needs change. However, the objectives on the A+ 220–1102 exam are general enough that a mastery of them will enable you to secure a mobile device today and into the future.
Security, as you've already guessed, is a large part of the CompTIA A+ certification. CompTIA expects everyone who is A+ certified to understand security‐related best practices and be able to secure both Windows operating systems and mobile devices.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance‐based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
You have been asked to create a working structure for your organization's network. You have three groups: Sales, Marketing, and R&D. You need to set up a network share and NTFS to allow Sales to access Marketing material but not modify it in any way. R&D must be able to write to marketing files and read Sales information. Marketing must only have read access to R&D and Sales. Each group should have the Modify permission to their respective folder. All permissions should be controlled with share permissions. How will you set up the folders for access, NTFS permissions, and share permissions?
THE FOLLOWING COMPTIA A+ 220‐1102 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
Troubleshooting is a major responsibility of an A+ technician's daily job. It may not be as glamorous as we'd like it to be, but it does make up a good percentage of our daily workload. Applying a systematic approach to software troubleshooting is the key to solving all problems. A systematic solution also works well in preventing problems in the first place.
Many of the common software problems that you will spend time solving can be prevented with proper preventive maintenance. Preventive maintenance tends to get neglected at many companies because technicians are too busy fixing problems. Spending some time on keeping those problems from occurring is a good investment of resources.
In this chapter, we'll look at applying the same troubleshooting methodology to common software problems. We'll also apply similar troubleshooting to security issues. First, we'll look at common symptoms of problems and their solutions. We'll then follow up with ways to deal with—and prevent—security‐related issues.
Windows is mind‐bogglingly complex. Other operating systems are complex too, but the mere fact that Windows has nearly 60 million lines of code (and thousands of developers have worked on it!) makes you pause and shake your head. Fortunately, you just need to take a systematic approach to solving software‐related issues.
Windows‐based issues can be grouped into several categories based on their cause, such as boot problems, missing files (such as system files), configuration files, and virtual memory. If you're troubleshooting a boot problem, it's imperative that you understand the Windows boot process. Some common Windows problems don't fall into any category other than “common Windows problems.” We cover those in the following sections, followed by a discussion of the tools that can be used to fix them.
There are numerous “common symptoms” that CompTIA asks you to be familiar with for the exam. They range from the dreaded Blue Screen of Death (BSOD) to spontaneous restarts and everything in between. They are discussed here in the order in which they appear in the objectives list.
The performance of your systems will inevitably slow down over time. This could be due to a multitude of causes, ranging from bad Windows Update patches to malware. Sluggish or slow performance is one of the hardest problems to solve on a Windows operating system, because many of the symptoms are related to each other.
The first step to solving the problem is identifying the component that is impacted by the performance issue. The following is a list of critical components that can be affected by slow performance:
As you can see from the list of possibly affected components, many of the symptoms are closely related, such as RAM, CPU, and disk. The excessive usage of RAM can create performance symptoms with your hard disk drive. If left for a long period of time, these can both lead to an increase in CPU activity.
There are several tools that you can use to identify the problem area so that you can focus your attention on narrowing down the problem. The first tool you should start up is the Task Manager, as shown in Figure 19.1. You can launch Task Manager in several different ways, such as right‐clicking the Start menu and selecting Task Manager, right‐clicking the taskbar and selecting Task Manager, or (my personal favorite) pressing Ctrl+Shift+Esc. The Performance tab will show you four of the five critical areas (detailed previously) on the left side. In this example, you can see that the processor is spiked out at almost 100 percent and all other systems are within tolerance.
FIGURE 19.1 The Performance tab in Task Manager
Now that you've isolated the problem to the critical area of CPU, you can narrow it down further by looking at the Processes tab, as shown in Figure 19.2. You can see that the Microsoft Windows Malicious Software Removal Tool process is using nearly 26 percent of the CPU. The other 74 percent is most likely distributed among other processes. By clicking the column headings for CPU, Memory, Disk, and Network, you can sort usage from high to low or from low to high. In this particular instance, the operating system was caught booting up, so that particular process was displaying high CPU usage.
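If you prefer the command line, you can get a similar high‐to‐low view of CPU consumers from Windows PowerShell. The following is a minimal sketch using standard Get-Process output; note that the CPU column reports cumulative processor seconds rather than the instantaneous percentage Task Manager shows:

# List the five processes that have consumed the most CPU time
Get-Process | Sort-Object CPU -Descending |
    Select-Object -First 5 Name, Id, CPU, WorkingSet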
FIGURE 19.2 The Processes tab in Task Manager
FIGURE 19.3 The Details tab in Task Manager
Using Resource Monitor, you can get a much more detailed view than what is displayed in Task Manager. You can open Resource Monitor with the shortcut on the lower left of the Performance tab in Task Manager, as shown in Figure 19.4. This tool allows you to read real‐time performance data on every process on the operating system. Resource Monitor also allows you to sort details, the same as Task Manager. You can click each critical area and drill down to the performance issue.
A unique feature of Resource Monitor is the visualization of data. When you select a process on the upper view, Resource Monitor automatically filters the activity of the critical area, as shown in Figure 19.5. As you can see in this example, the Edge browser processes have been selected and then the Network tab can be chosen to display the network activity and connections. The result is the isolation of network activity for this process. This can be done for any of the critical areas.
FIGURE 19.4 Resource Monitor
FIGURE 19.5 Selective isolation in Resource Monitor
Now that you've isolated the problem to an action or process in the operating system, you need to do the following:
The theory of probable cause may be that a hardware upgrade is required because a new version of the application demands more resources. Or it could be as simple as the job the application is running being larger than normal. Remember to question the obvious and do the simple things, such as rebooting, to see if the problem goes away. It is often joked that problems always go away after a reboot, and that's not far from the truth: sometimes a hung process is affecting another process, and a reboot fixes them both so the symptoms disappear. More likely than not, though, the problem will still be there. This is when you need to start testing your theory of probable cause to determine the root cause.
You might find that after running a large query the hard drive is extremely stressed. Your action plan might be to upgrade the hard drive or move the workload to a faster machine. In either case, you need to verify that the process is functioning the way the user expects it to.
If you determine that it's a certain report in the database, remind the user that the report takes time and maybe they should schedule it when the computer is not immediately needed. Or schedule the system for an upgrade of hardware to prevent these problems in the future.
Ultimately, you should document your findings so that other technicians do not waste time on the same issue. The more intricate the problem, the more time is wasted if you forget you've already solved it or can't remember the answer. Always document the actions taken, such as upgrades or changes to the process for the user. You should also note the outcome—whether it was successful or showed no immediate performance increase—so that if another technician is working on the same or a similar issue, they can gauge whether the solution is effective.
With the introduction of Windows Vista, the boot process changed from that of prior operating systems such as Windows XP, 2000, and NT 4.0. The boot process introduced with Windows Vista has been used ever since, all the way up to Windows 11, and it allows for the adoption of UEFI firmware.
In order to troubleshoot a failure to boot, you need to understand the complete boot process, starting with either the BIOS or the UEFI. The process is slightly different depending on which firmware you have on the motherboard. However, the outcome is the same: the hardware hands control over to the operating system so that the operating system can boot.
The initial boot sequence from hardware control to software control is almost identical in both BIOS and UEFI firmware. UEFI firmware does give you many more options, because UEFI drivers can be loaded before control is handed over to the software. This allows UEFI to treat all locations containing an operating system the same. Up to the point at which the hardware hands control over to the software, there is no difference between a network boot and a hardware boot.
After control is handed over to the software, several files are used to complete the operating system bootup. The most important files are as follows:
BOOTMGR The Windows Boot Manager (BOOTMGR) bootstraps the system. In other words, this file starts the loading of an operating system on the computer.

winload.exe winload.exe is the program used to boot Windows. It loads the operating system kernel (ntoskrnl.exe).

winresume.exe If the system is not starting fresh but resuming a previous session, then winresume.exe is called by BOOTMGR.

ntoskrnl.exe The Windows OS kernel is the heart of the operating system. The kernel is responsible for allowing applications shared access to the hardware through drivers.

ntbtlog.txt The Windows boot log stores a log of boot‐time events. It is not enabled by default.

System files In addition to the files just listed, Windows requires a number of files from its system directories (SYSTEM and SYSTEM32), such as the hardware abstraction layer (hal.dll), the Session Manager (smss.exe), the user session (winlogon.exe), and the security subsystem (lsass.exe). Numerous other dynamic link library (DLL) files are also required, but usually the lack or corruption of one of these files produces a noncritical error, whereas the absence of hal.dll causes the system to be nonfunctional.
We'll now look at the complete Windows boot process. It's a long and complicated process, but keep in mind that these are complex operating systems, providing you with a lot more functionality than older versions of Windows:
1. The BIOS/UEFI completes its POST, locates the boot device, and reads its boot sector to find BOOTMGR. Information in the boot sector allows the system to locate the system partition and to find and load into memory the BOOTMGR file located there.
2. BOOTMGR reads the boot configuration data (BCD) to get a list of boot options for the next step. The BCD contains multi‐boot information or options on how the boot process should continue.
3. BOOTMGR then executes winload.exe. This switches the system from real mode (which lacks multitasking, memory protection, and those things that make Windows so great) to protected mode (which offers memory protection, multitasking, and so on) and enables paging. Protected mode enables the system to address all the available physical memory.
4. If the system is resuming from hibernation, winresume.exe is responsible for reading the hiberfil.sys file into memory and passes control to the kernel after this file is loaded.
5. The HKEY_LOCAL_MACHINE\SYSTEM Registry hive and device drivers are loaded. The drivers that load at this time serve as boot drivers, using an initial value called a start value.
6. Winlogon.exe loads. At this point, you are presented with the login screen. After you enter a username and password, you're taken to the Windows desktop.

Now that you understand the boot process, let's look at how you can collect information to identify the problem. We'll consider this in two parts: hardware and software. The hardware process begins with the POST, and the software portion of the bootstrap begins with BOOTMGR.
You can collect information from the BIOS/UEFI firmware boot with third‐party system event log (SEL) viewers. However, it is very unlikely that you have a failure to boot because of a BIOS/UEFI firmware issue. It's not impossible, but it is highly unlikely.
To collect information on the software portion of the boot process, you can use boot logging. The ntbtlog.txt file is located at the base of the C:\Windows folder, as shown in Figure 19.6. Boot logging is off by default and needs to be turned on. To enable boot logging, issue the command bcdedit /set {current} bootlog Yes.
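If you want to confirm that boot logging took effect, or turn it back off when you're finished troubleshooting, the same bcdedit utility can be used from an elevated prompt. This is a minimal sketch; the braces are quoted so that PowerShell passes them through literally (the quotes aren't needed at a classic Command Prompt):

# Display the current boot entry, including the bootlog setting
bcdedit /enum '{current}'

# Disable boot logging again after troubleshooting
bcdedit /set '{current}' bootlog No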
You can also use the System Configuration utility (msconfig.exe) by selecting the Boot Log option on the Boot tab, as shown in Figure 19.7. Because the BCD is read by BOOTMGR, this point of the boot process is where logging would begin, and the first entries would be the loading of the kernel.
FIGURE 19.6 The ntbtlog.txt file
FIGURE 19.7 System Configuration options for boot logging
Chances are, if you're having trouble booting into Windows, you won't be able to access the command prompt to issue bcdedit commands, nor will you be able to access msconfig.exe. Not to worry. You can still access logging by allowing the operating system to fail two times in a row. The third time, the computer will boot into the recovery console. From there, click Troubleshooting ➢ Advanced Options ➢ Startup Settings ➢ Restart. When the computer restarts, it will boot into the Startup Settings menu, as shown in Figure 19.8. Of course, you won't be able to boot the computer to retrieve the files, but you can use the command prompt in the Windows Recovery Environment.
The idea is to collect information to identify the problem and, above all, to fix the problem. Sometimes you need to let Windows repair itself. The Windows Recovery Environment (WinRE) contains a Startup Repair option. Using the Startup Repair option is similar to issuing bootrec /rebuildbcd at the command prompt, which will rebuild the BCD.
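The WinRE Command Prompt also exposes the bootrec utility directly, and its other switches are worth knowing. The following is a sketch of the typical sequence; run only the steps your situation calls for:

bootrec /scanos
bootrec /fixmbr
bootrec /fixboot
bootrec /rebuildbcd

Here, /scanos searches all disks for Windows installations, /fixmbr writes a new master boot record, /fixboot writes a new boot sector to the system partition, and /rebuildbcd scans for installations and lets you add them to a rebuilt BCD store.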
If that fails, the ultimate solution might be to use the Reset This PC option in the Windows Recovery Environment or to install the operating system from scratch.
When it's reported that an operating system is missing, or “no OS is found,” the first thing to check is that no media is in the machine (USB, DVD, CD, and so on). The system may be reading this media during boot before accessing the hard drive. If that is the case, remove the media and reboot. You should also change the BIOS/UEFI settings to boot from the hard drive before any other media to prevent this issue in the future.
FIGURE 19.8 Startup Settings menu
If the problem is not as simple as removing the non‐bootable media, then you may have to boot into the Windows Recovery Environment. This may be a challenge, because if the BIOS/UEFI cannot boot to the Windows Boot Manager, then the Windows Recovery Environment cannot be executed; the Boot Manager is responsible for executing the Windows Recovery Environment. You have two possible options to fix this. The first option is to use the vendor's recovery console. This option depends on the vendor supplying a recovery console that can be accessed via the BIOS/UEFI; not every vendor supplies this tool. The second option is to boot the installation media and choose Repair when it first boots. Choosing this option will launch the Windows Recovery Environment booted from the installation media. You can then choose to repair the operating system by selecting Troubleshoot ➢ Advanced Options ➢ Startup Repair. The Windows Recovery Environment will then attempt to repair the operating system.
When an application crashes, you want to isolate the cause of the crash and solve it. The cause could be a compatibility issue, a hardware issue, or a host of other problems. One step to take early on is to look for updates/patches/fixes to the application released by the vendor. Be sure to try these updates on a test machine before rolling them out to all machines, and verify that they address the problem and do not introduce new problems.
One tool that is extremely helpful in identifying software problems is Reliability Monitor, as shown in Figure 19.9. Reliability Monitor allows you to see application crashes and the times and dates they occurred. It also allows you to see which updates were installed before and after the crashes. You can use Reliability Monitor to narrow down whether other software is causing the issues and what led up to the crashes.
FIGURE 19.9 Windows Reliability Monitor
In addition to Reliability Monitor, you can access the Windows event logs in Event Viewer for information about Microsoft‐based application problems, as shown in Figure 19.10. All third‐party vendors should log errors to the Windows event logs, but generally you will only find Microsoft products using these logs. In either case, you might find more information about why an application is crashing by looking at the Application log.
The applications on the operating system are not the only elements captured by Reliability Monitor. Reliability Monitor also captures the overall stability of the operating system. It will allow you to see every reboot caused by a problem and even blue screens. The overall stability of the operating system is drawn as a graphical line inside the Reliability Monitor application. This allows you to historically look back and trace when a problem started.
FIGURE 19.10 Windows event logs in Event Viewer
In Exercise 19.1 you will review the Reliability Monitor built into Windows.
The Blue Screen of Death (BSOD)—not a technical term, by the way—is another way of describing the blue‐screen error condition that occurs when Windows fails to boot properly or quits unexpectedly, as shown in Figure 19.11. If this happens during a boot, it is at this stage that the device drivers for the various pieces of hardware are installed/loaded. If your Windows GUI fails to start properly, more likely than not the problem is related to a misconfigured driver or misconfigured hardware.
FIGURE 19.11 Blue Screen of Death
You can try a few things if you believe that a driver is causing the problem. One is to try booting Windows into safe mode, which you can access via the Startup Settings in the Windows Recovery Environment. In safe mode, Windows loads only basic drivers, such as a standard VGA video driver and the keyboard and mouse. After you've booted into safe mode, you can uninstall the driver that you think is causing the problem.
Another option is to boot into the Windows Recovery Environment and use System Restore, which will revert the system drivers back to the state they were in when the restore point was created. Bear in mind that a System Restore will not affect personal files, but it will remove applications, updates, and drivers.
In Windows 7 and prior operating systems, you can enter the Advanced Boot Options menu during system startup by pressing the F8 key. The Advanced Boot Options menu contains an option called Last Known Good Configuration, which allows you to boot using the configuration from the last time you successfully started up and logged in. This option was removed beginning with Windows 8/8.1 and is also absent from Windows 10 and Windows 11; you should now use System Restore instead. The Windows Recovery Environment will automatically launch if there are two failed attempts to boot the operating system within 2 minutes.
There are many reasons why your computer might randomly turn off or shut down without warning. The problem is almost always related to faulty hardware or a faulty driver, but sometimes it can be as simple as tweaking your advanced power settings.
The first place to check is the System log in Event Viewer. Start combing through the entries, looking for the sources Kernel‐Boot and Kernel‐General. These entries will help you identify whether the operating system was shut down properly or suddenly lost power. Any time the operating system is shut down, the kernel will log an entry, and when it is powered back up, it will log another entry. In addition to the Kernel‐Boot and Kernel‐General sources, you should also investigate the EventLog source. These entries are created when the EventLog service detects a dirty shutdown, such as when power is removed. By checking the Event Viewer logs, you can identify whether the problem is a hardware problem or whether the operating system is actually shutting itself down.
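You can also pull these entries from the System log with PowerShell's Get-WinEvent cmdlet rather than scrolling through Event Viewer. This is a minimal sketch; the event IDs shown (41 for a reboot without a clean shutdown, 6008 for an unexpected shutdown recorded by EventLog, and 6005/6006 for the event log service starting and stopping) are the ones most commonly reviewed:

# Pull recent startup/shutdown-related entries from the System log
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 41, 6005, 6006, 6008 } -MaxEvents 50 |
    Select-Object TimeCreated, Id, ProviderName, Message |
    Format-Table -Wrap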
If you identify that the problem is a hardware issue, then the first step to resolving the problem is updating drivers. You should remove any autodetected drivers and reinstall the vendor's driver for the specific hardware. If the problem persists, then swapping known good hardware might help narrow down the issue.
If you determine that the operating system is shutting itself down, then the power settings should be checked. You can access the power settings by navigating to the Start menu ➢ Settings App ➢ Power & Sleep. Make sure that the PC is not going into sleep mode when plugged in. From this screen you can access the advanced power settings by clicking Change Plan Settings, then Change Advanced Power Settings. You will need to selectively tweak some of the settings and test your adjustments, such as turning off hard disk sleep, general sleep settings, and processor power management, to name a few. Change one timer or setting at a time and then monitor the system to determine whether that change fixed the problem.
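The power configuration can also be inspected and adjusted with the powercfg utility from an elevated prompt. A short sketch follows; the timeout value is an example, not a recommendation:

# Show the available power plans and which one is active
powercfg /list

# Report which device or timer last woke the system
powercfg /lastwake

# Example only: never sleep while on AC power (0 = never)
powercfg /change standby-timeout-ac 0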
A service's failure to start is directly related either to another application installed with conflicting resources or to a misconfiguration of the service. In either case, the first place to start is the Event Viewer, as shown in Figure 19.12. The System log will display an Event ID of 7000 from the source of the Service Control Manager. The reason for the failure will vary, depending on the problem.
If a service is conflicting with another resource, we recommend that you reinstall the software that installed the service that is failing. Although this might break the conflicting application, it is probably the quickest way to find a conflicting resource.
FIGURE 19.12 Service Control Manager events
If the service fails to start because of a misconfiguration, the most likely cause is the user account the service is configured to start with. If a misconfigured user account is the problem, you will see an Event ID of 7000 in Event Viewer, and the description will read that the service failed due to a login failure. You can verify the user configured to start the service in the Services properties, as shown in Figure 19.13. You open the properties of the service by right‐clicking the Start menu, selecting Computer Management, then Services, right‐clicking the service, selecting Properties, and finally selecting the Log On tab.
Make sure that the password for the user account has not changed and that the user account is not locked out. You can manually reset the password for the user and reenter the password in the Services properties. Also make sure that the account has the Log On As A Service right.
FIGURE 19.13 Services properties
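You can also confirm a service's logon account without opening the GUI. The following sketch uses the Print Spooler (Spooler) service purely as an example:

# Show the configured start type and logon account for the Spooler service
sc.exe qc Spooler

# PowerShell equivalent: list every service with the account it starts under
Get-CimInstance Win32_Service |
    Select-Object Name, StartMode, StartName, State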
Random access memory (RAM) is the computer's physical memory. The more RAM you physically have installed in a computer, the more items you can have in the running foreground of the operating system. If you run out of physical memory, then processes that are backgrounded (minimized) will be loaded into the page file, or paging file, on the hard drive. This is called paging, and it is totally normal to have a certain amount of paging happen during normal activity. The page file is actually hard drive space into which idle pieces of programs are placed while other active parts of programs are kept in or swapped into main memory. The programs running in Windows believe that their information is still in RAM, but Windows has moved the data into near‐line storage on the solid‐state drive (SSD) or hard drive. When the application needs the information again, it is swapped back into RAM so that the processor can use it.
When system processes are at risk of not having enough free memory, you will see a warning message similar to the one shown in Figure 19.14. When this happens, it means one of two things: either you simply don't have enough physical RAM in the computer, or a process is using a much larger amount of RAM than it normally needs. The operating system is letting you know that it can't swap out any more pages of memory to the page file (virtual memory).
FIGURE 19.14 Low memory warning
The larger the page file, the fewer times the machine has to swap out the contents of what it is holding in memory. The maximum possible size of your page file depends on the amount of disk space available on the drive where the page file is placed. By default, Windows configures the minimum and maximum page file size automatically. If you want to manage the page file size yourself, you must override the default behavior in the Virtual Memory dialog box, either by selecting System Managed Size for a particular drive or by entering a custom size. We'll show you how to get there in a moment.
In Windows, the page file is called pagefile.sys and is located in the root directory of the drive on which you installed the OS files. The page file is a hidden file; to see the file in Windows File Explorer, you must have the Folder options configured to show hidden files. Typically, there's no reason to view the page file in the filesystem, because you'll use Control Panel to configure it. However, you may want to check its size, and in that case, you'd use Windows File Explorer.
To modify the default virtual memory settings, follow these steps: Click Start, type Control Panel, and select it from the results. Click the System icon and select Advanced System Settings from the right panel. In the Performance area, click Settings. Next, click the Advanced tab (yes, another Advanced tab), and then, in the Virtual Memory area, click Change and the Virtual Memory dialog box will open, as shown in Figure 19.15. Note that in addition to changing the page file's size and how Windows handles it, you can specify the drive on which you want to place the file.
FIGURE 19.15 Windows Virtual Memory settings
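If you only want to check the page file's current size and location, you don't need to unhide it in File Explorer; a quick PowerShell query works as well. A minimal sketch (values are reported in megabytes):

# Show each page file, its allocated size, and current and peak usage
Get-CimInstance Win32_PageFileUsage |
    Select-Object Name, AllocatedBaseSize, CurrentUsage, PeakUsage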
The USB controller is a hardware component on the motherboard that supplies both power and a data path for the devices connected to it. It is possible to plug in too many devices and overload the power the port can handle. Most USB 2.0 ports can handle five concurrent unit loads of 100 mA each, for a total of 500 mA. USB 3.0 can handle six concurrent unit loads of 150 mA each, for a total of 900 mA. If a connected device draws more than the allotted power, it will malfunction or erratically disconnect.
The USB controller is also responsible for allotting a number of endpoints for the purpose of accepting data. The endpoints are equivalent to the number of lanes on a highway for cars (data) to travel in at any given moment. If you plug in too many devices, you can request more endpoints than the USB controller has allotted, and you will get an error similar to the one shown in Figure 19.16.
FIGURE 19.16 USB controller error
The easiest way to fix this issue is to move some USB devices around on the USB ports. You should move any devices that don't need USB 3.0 to USB 2.0 ports, such as keyboards and mice. Then ensure that your devices that require speed support are connected to USB 3.0 ports. If you are using a USB hub, make sure that you have connected it to a similar port. For example, if you are using a USB 2.0 hub, ensure it is connected to a USB 2.0 port and not a USB 3.0 port. More complex solutions are upgrading the driver to the latest driver the vendor supplies for the USB controller or just simply upgrading the hardware to a newer chipset.
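To see which USB devices the controller currently recognizes, and whether any of them are reporting an error state, you can list them from PowerShell. A minimal sketch:

# List USB-class devices and their current status (OK, Error, Degraded, Unknown)
Get-PnpDevice -Class USB |
    Sort-Object Status |
    Select-Object Status, FriendlyName, InstanceId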
A local profile is a group of settings for the user as well as their personal files. Local profiles can be slow to load because of items set to start when the profile is loaded. You can use Task Manager to selectively disable startup items, as shown in Figure 19.17. By the process of elimination and after several logouts and logins, you can narrow down the performance problem caused by slow‐loading local profiles.
FIGURE 19.17 Startup items in Task Manager
Typically, local profiles will not slow down login tremendously, because they don't need to traverse a network during login. Roaming profiles, which are stored on a server, are the usual cause of slow profile loading: they must traverse the network during login (loading from the server) and during logout (writing back to the server).
There are some things you can do to alleviate the stress on the network and speed up the load time of network profiles. For example, you can save space by deleting temporary Internet files in both Edge and Internet Explorer. You can also save a tremendous amount of space—sometimes gigabytes—by deleting downloaded files. In addition to the amount of data traversing the network, login scripts, Group Policy processing, and services starting upon login can also contribute to slow‐loading profiles.
The real‐time clock (RTC) on the motherboard is responsible for maintaining the correct time. The RTC can drift over time, causing the computer's clock to run fast or slow. When an operating system is running on a hypervisor, the problem is magnified, since the RTC is actually emulated by the hypervisor. When the time drifts too far, you can have authentication problems, certificates can be invalidated, and you'll have problems with web browsers.
Fortunately, the Windows operating system addresses time drift by periodically querying a Network Time Protocol (NTP) server. You will need to ensure that the client can contact the default time server, time.windows.com, or you will need to configure a time server the client can reach.
You can verify that the NTP server is reachable by opening the Date & Time Control Panel applet and trying to update time. This can be performed by clicking the Start menu ➢ Windows System ➢ Control Panel ➢ Date & Time, then selecting the Internet Time tab and clicking Change Settings. Be sure that the Synchronize With An Internet Time Server option is selected. Then click Update Now. The operating system will attempt to call out to the NTP server and the results will be displayed in the dialog box, as shown in Figure 19.18.
FIGURE 19.18 Configuration of time
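The Windows Time service can also be checked and nudged from an elevated prompt with the w32tm utility. A short sketch; time.windows.com is simply the default source discussed above:

# Show the current time source, stratum, and last successful synchronization
w32tm /query /status

# Force an immediate resynchronization with the configured time source
w32tm /resync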
Now that we've covered common symptoms of Microsoft Windows OS problems, let's look at common solutions that you can implement to solve those problems.
Rebooting a system often takes care of problems, for a multitude of reasons. One of the top reasons is that it allows the operating system to terminate hung processes gracefully. After the operating system reboots, the processes are normally restarted. An added bonus is that only the applications the user requires are relaunched. The goal is to fix the user's problem with minimal disruption. A reboot can't be done every time, especially if the user could lose work as a result.
Depending on the circumstances, when reproducing a problem, one of the first things to do is reboot. Rebooting serves the important purpose of isolating the problem so that you can reproduce it. For example, if you've isolated the problem to Excel not scrolling properly when a web browser is open, you should reboot and try to replicate the problem. If the reboot fixed the problem, you're done. If the problem still exists, you've isolated it further by eliminating other programs that could have been hung in the background and affecting it. The steps of rebooting and then opening Excel can also be used to verify when you've solved the problem.
Services normally don't need to be restarted. On occasion, however, a change is made that requires a service restart for the change to take effect. Services should also be restarted if they crash; although this is rare, it still happens from time to time. If a service crashes, you can restart it in the Computer Management MMC by selecting Services, then right‐clicking the service and choosing Start, as shown in Figure 19.19. You can use the same method to restart a running service.
FIGURE 19.19 Manually starting a service
Services can be configured to automatically start in the event of failure on the Recovery tab of the Services properties, as shown in Figure 19.20. For example, by default the Print Spooler service is set to restart on the first and second failure, but after that it will remain stopped. The reset counter can be set for a number of days, and the service can be started after a specific number of minutes after its failure. You can even have the computer restart or run a program in the event a service fails.
FIGURE 19.20 Service recovery
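Recovery actions can be scripted as well as set in the GUI. The following sketch configures the Print Spooler to restart after each of its first three failures; the 60000 values are example delays in milliseconds, and 86400 resets the failure counter after one day:

# Restart the Spooler after each failure, waiting 60 seconds each time
sc.exe failure Spooler reset= 86400 actions= restart/60000/restart/60000/restart/60000

# Confirm the recovery settings now in place
sc.exe qfailure Spooler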
If an application is crashing and acting erratically, it may be due to another application that has overwritten critical files used by the application, or the files may have become corrupted. In either case, choosing to repair an application will validate that it is installed properly and the process will replace any missing critical files for the application. Data files and configuration files will not be touched while the application is being repaired; only critical files (such as DLLs) will be checked and repaired.
You can repair an application by right‐clicking the Start menu, selecting Apps And Features, then Programs And Features (under the Related Settings heading), right‐clicking the application, and then selecting Repair, as shown in Figure 19.21. The application's installer will launch and start to repair the application.
If a repair does not fix the application, then you should perform a complete uninstall and reinstallation of the application. When you uninstall the application, the uninstaller should remove configuration files that could be causing the issue. Some applications require you to manually remove the configuration files, which are often in a folder of your profile, such as AppData. You should always contact the vendor for the specific folders to remove to ensure the application has been completely uninstalled. Once you've completely uninstalled the application, you can reinstall it and test to see if the problem is fixed.
FIGURE 19.21 Repairing an application
If the application is still broken after a repair or an uninstall and reinstall, the problem could be caused by a bug. You can update the application to get the latest fixes. However, it is always recommended to identify the problem and then cross‐reference it with the vendor's change log for the application.
Applications require a certain amount of RAM, storage space, and CPU speed. Some applications may also require an SSD hard drive, or a GPU with a specific speed and amount of VRAM. When searching for a solution for a problem, you should first verify the requirements for the application based on the vendor's requirements. This will establish an expectation of performance for the application on the given hardware. If the requirement is higher than the given hardware, then you will need to scale the hardware up by adding resources.
Although the hardware might meet the requirements of the application, most often the application is not the only application running on the hardware. This must be taken into consideration when trying to solve a problem.
The performance of the application might also need to be addressed. Often vendors will publish the minimum specification for the application requirements. However, when you speak with support their recommendations for your workload might be very different. This is another aspect of the solution that must be taken into consideration.
Computers are built with a finite amount of resources, such as RAM, CPU, and storage. After reviewing the application requirements and weighing the considerations of application coexistence with other applications and the current load of the application in question, you may decide to add more resources. Adding resources such as RAM, CPU, and storage is considered to be scaling up the hardware. Before adding resources, you should document the performance and the utilization of the resources. Then after you've added the resources, you should compare the current performance and utilization of resources by the application.
When a feature of the operating system stops functioning or errs in a manner that makes you suspect corruption, the System File Checker tool can scan and replace critical files. The System File Checker is launched from the command line with the command sfc.exe and performs a myriad of functions. For example, you can execute the command sfc.exe /verifyonly and the System File Checker will inspect all the critical files and verify integrity only. You can also supply the command sfc.exe /scannow and the tool will scan and repair any files that fail the integrity check. You can perform the same task on individual files, such as kernel32.dll. The System File Checker tool also allows offline repairs and checks.
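Put together, a typical System File Checker session from an elevated prompt looks something like the following sketch (kernel32.dll is just the example file mentioned above):

# Check the integrity of all protected system files without repairing anything
sfc /verifyonly

# Scan all protected system files and repair any that fail the integrity check
sfc /scannow

# Scan and repair a single file instead of the entire set
sfc /scanfile=C:\Windows\System32\kernel32.dll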
Using the System File Checker tool on the operating system to be inspected is pretty straightforward, as described previously. However, if the operating system can't boot, you can use the command in offline mode as well. You simply boot the Windows Recovery Environment locally from installation media or some other source. Once the Windows Recovery Environment is booted, select Advanced Options and open a Command Prompt as Administrator. Then enter sfc.exe /scannow /offbootdir=C:\ /offwindir=C:\Windows, assuming that Windows is installed on the C: drive.
As previously discussed, the System File Checker utility will verify the integrity and replace any corrupted files. However, the utility will only replace files if they fail an integrity check. An alternate method to ensure the operating system is properly installed is to perform a repair installation of Windows. The repair installation will reinstall all files from source media regardless of their integrity. The repair installation will leave all applications and user files in place.
To initiate a repair installation of Windows, you will first need to download a copy of Windows. The easiest way to download Windows is to use the installation media creation tool. You can download the media to a USB flash drive or to an ISO file. If you download the media to a USB flash drive, then all you must do to start the process is launch setup.exe and choose to keep all apps and files. If you download an ISO file, you will need to mount the ISO by double‐clicking the file. You can then start setup.exe and follow the prompts, choosing to keep all apps and files. Either option will begin the reinstallation of the operating system, as shown in Figure 19.22.
FIGURE 19.22 Reinstallation of Windows
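If you downloaded an ISO file, you can also mount it from PowerShell instead of double‐clicking it; the path below is only an example. A minimal sketch:

# Mount the downloaded ISO and report the drive letter it was assigned
$image = Mount-DiskImage -ImagePath 'C:\Downloads\Windows.iso' -PassThru
($image | Get-Volume).DriveLetter

You can then run setup.exe from the reported drive letter, choose to keep all apps and files, and dismount the image afterward with Dismount-DiskImage.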
Almost everyone, no matter how hard they've tried to keep their computer running properly, will experience a computer crash at some point. Many of the ways to get your computer back up and running (such as reinstalling the operating system) take a lot of time. In Windows, System Restore allows you to create restore points to make recovery of the operating system easier.
A restore point is a copy of your system configuration at a given point in time. Restore points are created one of three ways:
Restore points are useful for when Windows fails to boot but the computer appears to be fine otherwise, or if Windows doesn't seem to be acting right and you think it was because of a recent configuration change.
It's important to note that in Windows 10/11, the automatic system recovery option is disabled by default. It must be turned on manually. We will cover how to do so in Exercise 19.2.
Microsoft is moving toward unifying all settings under the Settings app for Windows 10/11. This differs from prior operating systems and the legacy Control Panel app. System Restore is one of those settings that can be opened only from the legacy Control Panel app. To open System Restore, click Start, type Control Panel and select it from the results, and then click System And Security. This will open to a list of Control Panel choices. Select Security And Maintenance, and then Recovery. A screen like the one shown in Figure 19.23 will appear, and you can select from one of several tasks.
FIGURE 19.23 Windows Advanced Recovery Tools
The Advanced Recovery Tools can be used to configure System Restore settings. You can also get to Advanced Recovery Tools by opening the System Control Panel (right‐click Computer and choose Properties) and selecting the System Protection tab.
Another option is to select how much disk space is available for System Restore. The less disk space you make available, the fewer restore points you will be able to retain. If you have multiple hard drives, you can allocate a different amount of space to each drive.
Exercise 19.2 demonstrates how to create a restore point manually in Windows.
In certain situations, a problem may require you to reinstall software. The time required to uninstall and reinstall the software can sometimes exceed the time it takes to reimage the operating system with the software preinstalled. Reimaging the computer will depend on whether you use operating system images or load each computer by hand.
If your organization does not use a standardized image for its computers, you can use the Windows Recovery Environment and select the Reset This PC option. If the computers have a preinstalled image, you can use the System Image Recovery option to reload the operating system. You can select this option by holding down the Shift key as you reboot the operating system, then choosing Advanced Options after the reboot, then selecting System Image Recovery. You can also reset the computer by opening the Settings app, clicking Update & Security and then Recovery, then selecting Reset This PC (Get Started), and finally choosing Remove Everything. Depending on the type of computer you have, it may have a proprietary process for recovering system images.
Occasionally, applying an update will fix a problem, mainly because that is what updates do: they fix problems. Once you've identified that applying an update is the solution, you need to download, distribute, and install the update. Luckily, by default Windows 10/11 automatically installs updates for the operating system to keep you up to date and problem‐free.
In large‐scale networks, the organization may employ a corporate patch‐management solution. Microsoft offers a free patch‐management solution called Windows Server Update Services (WSUS). Microsoft also sells a licensed solution called Microsoft Endpoint Configuration Manager (MECM), which performs many other functions in addition to patch management. If an update is required and your organization uses one of these products, the patch must be approved, downloaded, and deployed. Third‐party patch‐management solutions may also be used in your organization. Third‐party solutions are usually specific to an application or suite of applications, such as Adobe or Autodesk.
In small office, home office (SOHO) environments and small network environments, the update may be a one‐off installation for a specific application. In this case, the update just needs to be downloaded and installed, per the vendor instructions. Always make sure to have a plan to roll back from a bad update. Turning on System Protection before the update is a good idea. If an update fails, you can simply use System Restore to restore the operating system to a prior point in time.
Very rarely you will find that a Microsoft or third‐party update has created a problem on the operating system. When this happens, it's pretty easy to roll back updates by uninstalling them. Simply open the Settings app, select Update & Security, then View Update History, then Uninstall Updates, and finally select the update and choose Uninstall, as shown in Figure 19.24.
FIGURE 19.24 Uninstalling an update
On the left of the Installed Updates screen, you can select Uninstall A Program. This will take you to the Programs And Features – Uninstall Or Change A Program screen. From here, you can uninstall third‐party updates. After uninstalling an update, it's a good idea to reboot before testing to see if it fixed the issue.
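Microsoft updates can also be removed from an elevated prompt with the Windows Update Standalone Installer (wusa.exe). The KB number below is a placeholder for whichever update you've identified as the culprit:

# Uninstall a specific Microsoft update by its KB number (placeholder shown)
wusa /uninstall /kb:5012345 /norestart

After the uninstall completes, reboot and retest, just as you would after removing an update through the GUI.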
When you isolate a hardware problem to a faulty device driver, it is sometimes necessary to roll back the current driver to a prior version. This action will roll back the driver to the original version detected by Windows, also called the out‐of‐box driver. In some cases, it may roll back to a generic driver, which reduces functionality until a proper driver is installed.
This process can be completed with these steps:
When the rollback is complete, you should reboot the computer before testing to see if it fixed the issue.
When a problem has been determined to be a profile‐related issue, it is necessary to reset the Windows profile. When performing this action, ensure that the user's data is backed up. It is best to keep an entire copy of the profile before resetting it. The following are the most common places data is kept by the operating system:
FIGURE 19.25 Rolling back a driver
To back up a local profile, log into an administrative account (other than the one you are backing up), and then copy the profile under C:\Users to a new location. Do not move the profile, because the operating system references it in the Registry.
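A convenient way to make that copy is robocopy, which preserves attributes and permissions and skips the junction points inside a profile that can trip up a simple drag‐and‐drop copy. This is a sketch only; the username and destination path are examples:

# Copy the entire profile, including permissions, excluding junction points (/XJ)
robocopy C:\Users\jsmith D:\ProfileBackup\jsmith /E /COPYALL /XJ /R:1 /W:1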
You can then reset a local profile on the Advanced tab of System Properties, as shown in Figure 19.26. You can access the User Profiles dialog box by following these steps:
The user's files can then be manually copied over. The Profile dialog box also allows you to view the overall size of a user's local or remote profile, so it also helps in the troubleshooting process.
FIGURE 19.26 Deleting a local user profile
You can also use that procedure to delete a roaming user profile that has been left on the Windows operating system. However, performing the procedure on a roaming profile will not reset the profile. You will only remove the profile to clear space. To reset a network‐based roaming profile, perform the following steps:
There are a number of topics CompTIA expects you to know for the 220–1102 exam as it pertains to security issues. Many of these issues also appear in other CompTIA certification exams, such as Security+ and other exams that have a security component. Rest assured that for the 220–1102 exam, you do not need to know the depth of content as if you were preparing for the Security+ exam. However, you should be familiar with the following symptoms when determining if a security‐related problem has occurred:
This list is by no means exhaustive. What is an absolute is the fact that you should immediately rectify the security issue or quarantine the system if you experience even one of these symptoms. In the following sections we will cover the aforementioned security‐related symptoms, as well as explore some causes for these symptoms.
The operating system is by far the largest attack vector for threat agents. The software installed on the operating system and the files that the operating system stores are the perfect mixture of targets. Threat agents can target an unpatched application that contains a vulnerability. A threat agent can also sneak a file that is infected with malware onto the operating system. In the following section we will identify symptoms of common security issues along with their possible causes.
The solutions for the symptoms vary significantly. However, the overall goal is to keep your operating system protected. Protecting the operating system can be achieved by keeping the OS current on all patches. The applications that are installed on the operating system should be kept current with patches as well. Antimalware software should be installed, and best practices should be employed, as previously discussed in past chapters.
If your computer is hooked up to a network, you need to know when your computer is not functioning properly on the network and what to do about it. In most cases, the problem can be attributed to either a malfunctioning network interface card (NIC) or improperly installed network software. The biggest indicator in Windows that some component of the network software is nonfunctional is that you can't log in to the network or access any network service. To fix this problem, you must first fix the underlying hardware problem (if one exists) and then properly install or configure the network software.
In some situations, network connectivity issues can be related to security threats. Although you might not seem to have network connectivity, the NIC might be working fine; the real problem may be a malicious program that has crashed or is not operating as its creator intended. The malware acts as a proxy for the network traffic. This type of malware is usually intent on stealing credentials or banking information, but it can also be used to inject ads and cause browser redirection.
Not all malware that causes network connectivity issues acts as a proxy. Some malware changes network settings, such as your DNS servers. This type of malware will cause browser redirections by controlling what you resolve through its DNS. It is also common for malware to change your system proxy so that all requests go through the threat agent's remote proxy.
Users have plenty of real viruses and other issues to worry about, yet some people find it entertaining to issue phony threats disguised as security alerts to keep people on their toes. Two of the more popular hoaxes that have been passed around are the Goodtimes and the Irina viruses. Millions of users received emails about these two viruses, and the symptoms sounded awful. These two remain the most well‐known hoaxes; many others have circulated since, most of them far less famous.
Both of these warnings claimed that the viruses would do things that are impossible to accomplish with a virus. When you receive a virus warning, you can verify its authenticity by looking on the website of the antivirus software you use, or you can go to several public systems. One of the most helpful sites to visit for the status of the latest viruses is the website of the CERT organization (www.cert.org). CERT monitors and tracks viruses and provides regular reports on this site.
When you receive an email that you suspect is a hoax, check the CERT site before forwarding the message to anyone else. The creator of the hoax wants to create widespread panic, and if you blindly forward the message to coworkers and acquaintances, you're helping the creator accomplish this task. For example, any email that includes “forward to all your friends” is a candidate for research. Disregarding the hoax allows it to die a quick death and keeps users focused on productive tasks. Any concept that spreads quickly through the Internet is referred to as a meme.
A desktop alert is a notification or dialog box that is crafted to look like it was generated by the operating system. This is a crafty way of social engineering the user into becoming a victim. The malware is crafted to generate a pop‐up box that states there is a security error detected and that you should call Microsoft or Windows Support right away, as shown in Figure 19.27. When you call the number given, you are calling scammers who will try to sell you software you don't need.
FIGURE 19.27 Malware‐generated call‐in alert
Social engineering is not the only method a threat agent uses. The threat agent can generate realistic operating system dialog boxes that coax you into downloading and installing malware, as shown in Figure 19.28. The average person might just figure it's time to update their software—it even states that, by downloading the software (malware), you agree to the EULA.
FIGURE 19.28 Malware‐generated download alert
Another really popular method of distributing malware is by using browser push notification messages. The user will browse to a malicious site and then the user will be coaxed into allowing push notifications for the site. Once this is allowed, the site can push notifications to the operating system and spawn a notification that looks like it's coming from the operating system. Use of the operating system notifications is a well‐known attack aimed at coaxing the user into installing malware or pushing advertising to the user.
In some cases, the initial installation of the malware is prompted by a browser push notification. After the user installs the malware, it might start prompting deals of the day or other ads. This type of malware is considered adware, and it is becoming rare compared to other types of malware covered in this chapter.
Outside of user education, antimalware software can be used to prevent this type of threat. However, user education is much more effective. A routine review of sites allowed to send notifications should be performed periodically. A routine review of installed applications should be performed as well to ensure that malware has not been installed at some point.
One clever way of spreading a virus is to disguise it so that it looks like an antivirus program. When it alerts the user to a fictitious problem, the user then begins interacting with the program and allowing the rogue program to do all sorts of damage. One of the trickiest things for threat agents to do is to make the program look as if it came from a trusted source—such as Microsoft—and mimic the Windows Notification Center interface enough to fool an unsuspecting user. The notification might show that a new download or update is waiting for you to install it. It may even notify you that your antivirus software is disabled and needs attention.
Education is the only way to combat rogue antivirus. You should arm yourself with knowledge of current antivirus programs, which you can gain by reading consumer articles on the latest antivirus and antimalware applications. You should also pass that education on to others in your family and organization. An easy way to do this is to document which antivirus and antimalware products are actually installed; then there is no confusion when a notification or pop-up appears claiming it's time to install the rogue product.
Threat actors who create malware have a number of methods by which they can wreak havoc on a system. One of the simplest is to delete key system files and replace them with malicious copies. When this occurs, the user can no longer perform the operation associated with the file, such as printing, saving, and so on. When malware embeds itself in an operating system and gains privileged access, it is known as a rootkit. Once the operating system is infected, the threat actors will comb through files looking for sensitive information that they can ransom.
Just as harmful as deleting files, the malware can rename files or change the permissions associated with them. This could prevent the user from accessing the files or even copying them off to an uninfected system. When an operating system is infected with ransomware, the malware will encrypt the files. The files might also disappear from the user's normal view, and the ransom request may be placed in the parent directory. The mode of operation for most ransomware is to rename each file with a unique extension as it is encrypted. The ransom note is then placed in every folder so that the user has instructions for paying to decrypt the files.
Starting with Windows Vista, Microsoft enabled User Account Control (UAC) by default. This change to the operating system greatly reduced the number of attempts to use elevated privileges and definitely made it more difficult to change system files. In addition to enabling UAC, Microsoft removed the Modify NTFS permission on system files from the Administrator account. Only the Trusted Installer (Windows Update) has access to modify these files; even the System (operating system) permissions are Read and Execute. If that wasn't enough, a self-healing service watches for changed files and replaces them with trusted versions. The System File Checker (SFC) is a user tool that can be used to manually heal missing or modified system files. Malware can maliciously modify files and, in some cases, cause them to go missing. The System File Checker was covered in Chapter 15, “Windows 10 Administration,” and was also discussed earlier in this chapter.
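If you need to repair protected system files by hand, a minimal sketch from an elevated Command Prompt looks like the following. The sfc command is the tool discussed above; the DISM command is a commonly paired repair step that is not part of the discussion here but is often run when SFC alone cannot fix the files:

    REM Scan all protected system files and repair any that are modified or missing.
    sfc /scannow
    REM If SFC reports files it could not fix, repair the component store and then rerun SFC.
    DISM /Online /Cleanup-Image /RestoreHealth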
Failed updates for Windows—assuming they aren't caused by connectivity issues—can often be traced to misconfigured settings. These settings can also cause the operating system to report that an update needs to be installed when it has already been installed. The best solution is to find the error code being reported, resolve the problem with the help of the Windows Update Troubleshooter, and then download the update again.
Recent versions of Windows 10/11 include a troubleshooting utility. To access the utility, click the Start menu and select the Settings app. Once the Settings app opens, select Update & Security, then click Troubleshoot, then Additional Troubleshooters, and finally choose Windows Update.
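The Troubleshoot settings page can also be opened directly from the Run dialog or a Command Prompt by using its ms-settings URI; this shortcut is offered only as a convenience and is not part of the steps above:

    REM Open the Troubleshoot page of the Settings app directly.
    start ms-settings:troubleshoot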
Microsoft has also published some common problems that should be checked. You can access the document, which lists the solutions Microsoft recommends, at https://support.microsoft.com/en-us/windows/troubleshoot-problems-updating-windows-188c2b0f-10a7-d72f-65b8-32d177eb136c.
The web browser is the most used application on the operating system. It's so popular that Google built an entire operating system, ChromeOS, around its Chrome web browser. On Windows, the web browser is just an application like any other application on the operating system. However, it is the easiest way for a threat agent to access your operating system. Therefore, in this section we will cover some of the most common symptoms you may observe related to the web browser.
Pop‐ups (also commonly known as popups) are both frustrating and chancy. When a user visits a website and another instance (either another tab or another browser window) opens in the foreground, it is called a pop‐up; if it opens in the background, it is called a pop‐under. Both pop‐ups and pop‐unders are pages or sites that you did not specifically request and that may only display ads or bring up applets that should be avoided.
Most modern web browsers come standard with a pop‐up blocker and by default they block all pop‐ups. If you visit a link that contains a pop‐up, the browser will notify you that it has blocked it. If the pop‐up is on a trusted website, you will have the option to allow pop‐ups for the site.
Threat agents have found other creative ways to pop up ads or referred content. Through the use of JavaScript they can serve overlays over the original web page. In older web browsers, the JavaScript could even hold you hostage at the malicious page. However, most newer browsers limit the access of JavaScript and you can always close the web page.
If you continually receive pop‐ups or overlays, then you may be infected with malware or a rogue page is minimized serving the pop‐ups/overlays. A reboot should clear the problem, but it is also best to scan your operating system with antivirus/antimalware software.
Two major problems plague digital certificates: one is related to the proper setting of time and date, and the other is related to trust. The time and date on the host should always be checked, along with the expiration date of the SSL certificate; an expired certificate will cause the browser to report a problem.
On the other hand, when an untrusted SSL certificate is encountered, the web browser will alert you that the SSL certificate is not valid, as shown in Figure 19.29. Every web browser comes with a list of trusted certificate publishers. If a certificate was not issued by one of those trusted publishers, or was not issued for the website being visited, a warning box will come up preventing you from visiting the site. You can click through the warning prompt and visit the site anyway, but the address bar will still read “Not secure” or display an unlocked lock icon during your visit.
FIGURE 19.29 An untrusted SSL certificate warning
The problem should always be investigated further, since information entered on the site could be intercepted if the site was hacked. The first diagnostic step is to check the hostname in the URL; every certificate is issued for a specific hostname and must match the hostname in the URL. If you tried accessing the site by its IP address, this warning is benign and can be disregarded. However, if you entered the correct hostname, then the certificate should be inspected. Every web browser is different, but every web browser will let you view the certificate. In Figure 19.30 we can see that the certificate has been self-signed.
Both the Issued To and Issued By fields in the certificate are the same. This is common when the website is in development, but it is not normal once the website has been placed into production. It is also common on network management equipment that allows configuration through a web page. Often the management web page will use a self‐signed certificate. For this purpose, the certificate can be imported into your trusted publisher certificate store so that it can be trusted in the future.
FIGURE 19.30 A self‐signed certificate
Pharming is a form of redirection in which traffic intended for one host is sent to another. This can be accomplished on a small scale by changing entries in the hosts file and on a large scale by changing entries in a DNS server, also known as DNS poisoning. In either case, when a user attempts to go to a site, they are redirected to another site. For example, suppose Illegitimate Company ABC creates a site to look exactly like the site for Giant Bank XYZ. The pharming is done (using either redirect method) and users trying to reach Giant Bank XYZ are tricked into going to Illegitimate Company ABC's site, which looks enough like what they are used to seeing that they give their username and password.
As soon as Giant Bank XYZ realizes that the traffic is being redirected, it will immediately move to stop it. But while Illegitimate Company ABC will be shut down, it was able to collect data for the length of time that the redirection occurred, which could vary from minutes to days.
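On a Windows machine, a quick way to check for the small-scale (hosts file) form of pharming is to inspect the local hosts file and clear the DNS resolver cache. The commands below are standard Windows tools, and the malicious-looking entry shown in the comment is purely illustrative:

    REM Display the local hosts file and look for entries you did not create,
    REM for example a line such as:  203.0.113.10  www.giantbankxyz.com
    type %SystemRoot%\System32\drivers\etc\hosts
    REM After removing any rogue entries, clear the DNS resolver cache.
    ipconfig /flushdns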
Another form of browser redirection is called affiliate redirection. This type of browser redirection can be very subtle. For example, when you search for a product and click a link in the results, the malware redirects your browser to the intended site but with an affiliate link attached. Now anything you purchase credits a commission to the person who redirected the browser with the affiliate link. This malware is usually related to an unscrupulous browser plug-in.
Because an attacker can use many different tactics to launch browser redirection, the mitigation is not straightforward. However, implementing end‐user education, maintaining updates for browsers and operating systems, and ensuring that your antimalware/antivirus software is up‐to‐date are best practices to protect against browser redirection.
Best practices for malware removal are a key objective for the 220-1102 exam. The best way to think about this is as a seven-item list of what CompTIA wants you to consider when approaching a possible malware infestation. The following discussion presents the information that you need to know.
Before doing anything major, it is imperative first to be sure that you are dealing with the right issue. If you suspect malware, try to identify the type (spyware, virus, and so on) and look for the proof needed to substantiate that it is indeed the culprit.
You first need to identify the problem. This can be done with a multitude of tools, but hopefully your antivirus/antimalware software will be the first tool that helps to identify the problem. If the antivirus/antimalware software fails to identify the problem, then other third‐party tools must be used.
Earlier in this chapter, in the section “Troubleshooting Common Microsoft Windows OS Problems,” we introduced you to Resource Monitor to isolate performance problems. A similar tool, called Process Explorer, can be downloaded from Microsoft Sysinternals. This tool allows a different visualization from what Resource Monitor provides, as shown in Figure 19.31. You can see the process list on the operating system; in this case, there is a process called regsvr32.exe. When you look more closely, you can see that it is creating network traffic and is very active on the operating system. The process is actually a ransomware application calling out to command-and-control servers. It is sneakily disguising itself as the regsvr32.exe utility, which is normally used to register DLLs.
Unfortunately, this lone example will not give you the expertise of a professional virus/malware hunter. However, it provides just one of many examples of third‐party software that can help you detect and identify viruses and malware running on a computer.
Many built-in tools, such as netstat.exe, can also provide assistance. For example, the netstat -nab command enables you to view all the processes on the operating system and their network connections. Using the netstat -nab command is how it was identified that something looked wrong with the regsvr32.exe process; otherwise, the process would have looked like any other process on the operating system.
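A brief sketch of that check from an elevated Command Prompt follows; elevation is required because the -b switch needs administrator rights, and the regsvr32 filter simply reflects the example above:

    REM List connections with addresses (-n), all ports (-a), and the owning executables (-b).
    netstat -nab | more
    REM Narrow the output to lines that mention the suspicious executable name.
    netstat -nab | findstr /i regsvr32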
In addition to applications that can identify viruses and malware, third-party websites can aid in detection. One such website is VirusTotal (www.virustotal.com). VirusTotal allows users to upload potentially unsafe applications. The service will scan the applications against more than 70 antivirus engines and report whether a signature is found. It's a valuable tool for validating that an application you've found on your operating system is malicious. Many tools, such as Process Explorer, can even check against the VirusTotal database.
FIGURE 19.31 Process Explorer
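If you would rather not upload the file itself, you can compute its hash locally and search for that hash on VirusTotal instead. The certutil tool is built into Windows; the file path below is only an example of where a suspicious copy might be found:

    REM Compute the SHA-256 hash of a suspicious executable.
    certutil -hashfile "C:\Users\Public\Downloads\regsvr32.exe" SHA256
    REM Paste the resulting hash into the search box at www.virustotal.com.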
Once you have confirmed that a virus or malware is at hand, then quarantine the infected system to prevent it from spreading the virus or malware to other systems. Bear in mind that the virus or malware can spread in any number of ways, including through a network connection, email, and so on. The quarantine needs to be complete enough to prevent any spread.
Ransomware is probably the biggest risk, since it spreads through a network rapidly and encrypts files in its path. The ransom demanded is usually proportional to the number of files or the total size of the files encrypted. Over the past eight years, ransomware has made headline news as it has taken down extremely large companies. In one instance, the Petya ransomware even took down most of the computers in Ukraine, along with systems in several other countries.
If an infected system is discovered and needs further analysis, it should be quarantined from the network and put into an isolated network. This hot network is a place where it can be studied further, without repercussions to the operational network.
This is a necessary step because you do not want the infected system to create a restore point—or return to one—where the infection exists. System Protection in Windows 10/11 is turned off by default, but if it has been enabled, you can disable it with these steps:
FIGURE 19.32 System Protection
The steps taken here depend on the type of virus or malware with which you're dealing, but they should include updating antivirus and antimalware software with the latest definitions and using the appropriate scan and removal techniques. You can update Microsoft Defender from the Microsoft Defender Security Center by clicking the task tray in the lower‐right corner of the desktop, then right‐clicking the Windows Security shield, and finally clicking Check For Protection Updates, as shown in Figure 19.33.
FIGURE 19.33 Microsoft Defender Security updates
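The same definition update can also be triggered from an elevated Command Prompt with the Defender command-line utility. This is an optional alternative to the Security Center steps above, and the path assumes a default installation:

    REM Force Microsoft Defender to check for and install the latest definitions.
    "%ProgramFiles%\Windows Defender\MpCmdRun.exe" -SignatureUpdate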
Depending on the type of virus or malware, you may need to boot into safe mode or the Windows Recovery Environment (as discussed earlier in this chapter). However, the remediation of the virus or malware will be different for each situation. Microsoft Defender Security can automatically perform an offline scan. To perform an offline scan, click the task tray in the lower‐right corner, then right‐click the shield, select View Security Dashboard, click Virus & Threat Protection, click Scan Options, and select Microsoft Defender Offline Scan, as shown in Figure 19.34.
After you confirm that you have saved your work and click Scan in the confirmation dialog box, UAC will prompt you to answer Yes, and then Windows will reboot. The Windows Recovery Environment will boot and Windows Defender Antivirus will run, as shown in Figure 19.35.
FIGURE 19.34 Microsoft Defender Offline scan
FIGURE 19.35 An offline Microsoft Defender Antivirus scan
In some situations, such as in a ransomware attack, no remediation can be performed because the user files are encrypted. In these cases, the malware should be removed from the operating system, and then the user data must be restored from a backup. The unfortunate and terrifying fact when it comes to ransomware is that there will be loss of work.
In many instances, remediating the virus or malware is impossible because no one knows for sure what the virus or malware actually does. Antivirus researchers can document the delivery system that a virus or malware uses to enter your system, and you can then patch the vulnerability, which is part of the remediation process. What antivirus research usually cannot do is document the payload, because the payload is typically encrypted and changed depending on the needs of its creator. In these cases, the remediation might be to sanitize the drive and reinstall the operating system from an image or manually install it.
The odds of the system never being confronted by malware again are slim. To reduce the chances of it being infected again, schedule scans and updates to run regularly. Most antimalware programs can be configured to run automatically at specific intervals; however, should you encounter one that does not have such a feature, you can run it through Task Scheduler.
Microsoft Defender Security is scheduled to automatically scan the operating system during idle times. However, if you want to schedule a scan, you can use Task Scheduler:
Windows Defender Security is scheduled to automatically download updates during the Windows Update check, which is daily. If you require the latest updates, use either the Check For Updates option in the Windows Update settings or the Check For Updates option in the Microsoft Defender Security Center.
FIGURE 19.36 Creating a Windows Defender Security scheduled scan
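As an alternative to clicking through Task Scheduler, a scheduled scan can be sketched from an elevated Command Prompt with schtasks. The task name and start time here are arbitrary examples, and the path assumes a default Defender installation:

    REM Create a daily 2:00 a.m. quick scan, run as SYSTEM, using the Defender command-line scanner.
    schtasks /Create /TN "Defender Quick Scan" /SC DAILY /ST 02:00 /RU SYSTEM /TR "\"%ProgramFiles%\Windows Defender\MpCmdRun.exe\" -Scan -ScanType 1"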
Once everything is working properly, it is important to create restore points again, should a future problem occur and you need to revert back. You can enable System Protection by following these steps:
You can then manually create a restore point by clicking Create in the System Protection dialog box, typing a description (such as after remediation ‐ date), clicking Close (in the confirmation dialog box), and clicking OK to close the System Properties.
Education should always be viewed as the final step. The end user needs to understand what led to the malware infestation and what to avoid, or look for, in the future to keep it from happening again. This training can be formal training in a classroom setting, or it can be an online training in which the user must participate and answer questions.
It is common for large companies to require annual or biannual end-user training on threats. It is becoming more common for this training to be done online, and a number of companies offer it as a service. It is also not uncommon for a company to send a simulated phishing attempt to its employees; an employee who falls for the phishing attempt is automatically signed up for mandatory training. Incentives are common as well, such as a gift card for the first employee who notifies the IT department of the phishing attempt.
As mobile devices have been rapidly replacing the desktop and laptop machines that used to rule the workplace, the equipment an administrator must maintain has now evolved to cover a plethora of options. This section focuses on common mobile OS and application issues and some of the tools that can be used to work with them. A subsequent section will look at the same topics with more of a focus on security.
In the following sections, we will cover many common symptoms of problems with mobile OSs and applications, and we will cover how to identify application symptoms as they appear in the objectives.
Mobile devices are generally error free and function fine. However, as we install applications on the mobile OS, we introduce potential problems. This happens because a mobile application is generally developed for a nominal platform and is usually tested on only the one or two devices the developer sees fit to test. The developer can't account for every make and model of mobile device, and this is generally why we see application problems. In this section we will cover the most common application problems you may encounter with mobile devices. You will notice that they are all somewhat connected and usually have the same steps to rectify the problem.
If an application (app) does not load, it could be attributed to a multitude of reasons. One common reason is that the application is still running in the background and is not really loading; it is simply being brought to the foreground and becoming the application in focus. When you close an application, it sometimes doesn't close all the way down and free up memory. Instead, it gets moved to the background and is technically still running.
The first thing you should try if an application is not loading is to force‐quit the application. To force‐quit an application on an Android device, press the tab view (usually the leftmost soft button) and then swipe the application left or right to close. On Apple devices, double‐tap the Home button or swipe up on the application you want to close.
Another common problem related to applications not loading is that sometimes the cache associated with the application is corrupted. This usually happens right after an application upgrades itself, which is all the time on mobile devices. On an Android phone, you can clear the application's cache by tapping Settings, tapping Apps, choosing the application, tapping Storage, and finally tapping Clear Cache. On Apple devices, tap Settings, tap General, tap iPhone Storage, choose the application, and finally tap Reset Cache On Next Launch. Clearing the cache will not affect the majority of the application's storage.
In many cases, an application will not allow you to clear its cache. If the option is not there, or after you've cleared the cache the application is still not loading, uninstall and reinstall the application. This option will (should) remove any data associated with the app.
You can remove Android applications by tapping Settings, tapping Apps, choosing the application, and tapping Uninstall. You can then visit the Google Play store and reinstall the application.
Apple is even simpler. All you need to do is tap and hold on an icon until all the icons dance back and forth. An X will be displayed in the upper‐left corner of the application icon. Simply tap the X to uninstall the application. You can then visit the App Store and reinstall the application.
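For a technician working on an Android device with USB debugging enabled, the same force-quit, cache/data clear, and uninstall operations can be sketched from a Command Prompt on a computer with the Android platform tools installed. The package name com.example.app is a placeholder for the real application package:

    REM Force-quit a misbehaving application.
    adb shell am force-stop com.example.app
    REM Clear the application's stored data and cache (more thorough than Clear Cache in Settings).
    adb shell pm clear com.example.app
    REM Uninstall the application entirely.
    adb uninstall com.example.app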
Another less common problem with applications is that they crash or close out unexpectedly. This issue is less common since most applications run fine, but occasionally you will have a mobile application just close or crash out on you. This is frustrating, because it usually happens when you most need it and it is generally an intermittent problem. This is how this particular problem differentiates itself from an app not loading; it doesn't crash every time.
The recommendations are the same for a crashing or closing application. Ultimately, you need to find the series of events or steps to trigger the bug and crash or close the application. Once you've reproduced the problem, it's time to try to fix the issue by doing one or more of the following:
If none of these solutions works, then it may be time to check the vendor's site for any similar problems (and solutions) encountered by others. Support for applications on mobile devices is normally forum or community support. However, some paid mobile applications have email‐based support. Describe the issue, the device make and model, steps to reproduce the problem, and other applications installed on the device.
You may have one or more applications on your mobile device that fail to update. Generally, developers will support older versions of an application only until it no longer makes sense because of new features. The developer's expectation is that the application will be kept updated on all mobile devices so that its features operate as expected. The user will therefore most likely see crashes, closes, or other erratic behavior if the application is not up to date.
The Google Play store and the Apple App Store manage the purchase, installation, and upgrade of applications. This upgrade maintenance on installed applications happens in the background without the user ever knowing it occurs. However, from time to time you may encounter an application that does not want to upgrade automatically. The first troubleshooting step should be to try to manually upgrade it from the Play Store or the App Store.
If manually updating the application does not work, then there are a few other steps you can take to troubleshoot the application, such as force‐quitting the application and rebooting the phone to close any applications that may be stuck in memory. The next step is to temporarily disable any antivirus or antimalware software installed on the device. Then try to upgrade the application manually from the Play Store or the App Store.
Another consideration is to make sure that you are connected to the Internet via Wi‐Fi. Most app store applications will treat cellular data as a metered connection and will not automatically update applications. Also make sure that the app store is configured to automatically update applications.
If all else fails and the application still doesn't want to upgrade, you can try to uninstall the application and reinstall it. You can follow the same guidelines for a crashing or closing application. Before uninstalling the application, make sure you check the compatibility for the latest version of the application. You could uninstall the application and find you can't reinstall it, because your device does not meet the minimum specifications. This could also be the original problem, which would explain why the application won't update.
There are a number of reasons you can have performance issues with a mobile device. Most performance issues are directly related to the applications on the device. For example, an application may use too much processing time, which can cause poor battery life and performance. A group of applications can use all the available RAM and starve the device of memory. In the following sections, we cover the various performance problems you may encounter with a mobile device.
Slow performance is almost always related to RAM usage. Mobile operating systems operate much like conventional desktop or laptop operating systems; the only real difference is that the default action for an application is not to close it but to put it into the background. As programs are loaded into RAM, they allocate a percentage for their variables and inner workings. When RAM fills up, the mobile device will swap background memory pages onto the built-in storage, a process similar to the page file process. This slows down the device because its focus is now on clearing up memory for competing applications.
Fortunately, most mobile devices allow you to see the RAM usage at a glance and over a longer period of time. On an Android device, tap Settings ➢ Battery And Device Care ➢ Memory. You will see the memory usage for the device, along with each application and its own usage. You can also clear up memory from here, which basically just closes the applications. Unfortunately, you cannot monitor RAM usage on an Apple mobile device, but a simple soft reset works just as well.
It is uncommon to have a performance problem attributed to high CPU on a mobile device. That is not to say it doesn't happen; it's just uncommon. To narrow down problems with a particular application performing slowly, reboot the device and launch only that particular application to isolate and monitor its performance.
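If the built-in Settings screens don't give you enough detail on an Android device, ADB can report memory and CPU usage per process. USB debugging must be enabled, and the package name below is a placeholder:

    REM Summarize RAM usage across the device and per application.
    adb shell dumpsys meminfo
    REM Show memory detail for a single application.
    adb shell dumpsys meminfo com.example.app
    REM Show recent CPU usage by process.
    adb shell dumpsys cpuinfo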
Frozen is a silly term that technicians use when something is not functioning or responding. It doesn't really describe the temperature, just the functionality—a block of ice.
If the system is frozen (not responding to a single thing), it will appear with the same symptoms as a nonresponsive touchscreen. One way you can differentiate between a frozen system/lockup and a nonresponsive touchscreen is whether the device will soft reset. If the device will not soft reset, then a hard reset might need to be performed. The hard reset procedure should be researched for the type of device you are trying to hard reset. For example, Apple has several different procedures depending on the model and generation of the device. Samsung also has hard reset procedures that differ based upon the model of the device.
If the restart does not work, plug in the device, let it charge (an hour or more is recommended), and then try to restart it. The power level in the battery can sometimes be so low that performance is turned all the way down and the device will appear unresponsive and frozen.
Random reboots and restarts could be a symptom of a hardware issue. They can also be related to a problem in the operating system. When a problem is intermittent or random, it is very hard to diagnose. However, by checking the following you can at least rule out some of the most common culprits of random reboots.
When an OS fails to update for a mobile device, there can be a number of reasons for the issue. However, as you will see, the common troubleshooting steps are no different than those for any other application on the mobile device.
If none of these suggestions allows you to successfully update the mobile device, then you can try to manually install the operating system update instead of waiting for the over-the-air (OTA) update. This will require a computer, and depending on the vendor, this option may not work; the vendor needs to support manual installation of updates with their third-party utility. Apple supports the iTunes application, which will work with the device to upgrade the operating system. For more information, visit https://support.apple.com/en-us/HT212186.
Batteries never last as long as you would like. Apple defines battery life as the amount of time a device runs before it needs to be recharged (as opposed to battery life span, which is the amount of time a battery lasts before it needs to be replaced). Tips for increasing battery life include keeping OS updates applied (they may include energy‐saving patches), avoiding ambient temperatures that are too high or too low, letting the screen automatically dim, and turning off location‐based services. You should also disconnect peripherals and quit applications not in use. Wi‐Fi, for example, uses power when enabled, even if you are not using it to connect to the network.
Outside of the preceding usage tips, poor battery life is sometimes attributable to a performance problem. High RAM usage can shorten battery life because power is expended on moving pages of memory in and out of RAM. This is done to keep the foreground applications running.
When most mobile devices get too warm, they will tell you that they need to cool down before they can continue to be used, and they will automatically take measures to protect themselves (turning off features, closing apps, and so on).
One of the most concerning risks is the lithium-ion (Li-ion) battery contained inside these devices. When a Li-ion battery gets too hot, you risk explosion or fire from a condition called thermal runaway, in which the battery gets so hot that the separator inside the battery melts and causes a chain reaction.
Luckily, mobile devices shut down when they get too hot from ambient temperatures or internal temperature from the CPU. One of the best ways to prevent overheating is to avoid ambient temperatures that are too hot. Avoid having the device in direct sunlight for extended time periods, in a hot car on a summer day, or on top of a heat source. When the device does overheat, you can often help it cool down quicker by removing any protective case that may be there—and putting it back on later.
We rely heavily on our mobile devices, and the day-to-day functionality of our mobile devices relies heavily on connectivity to the outside world. Connectivity to the provider (cellular) network is generally supported by the provider, such as Verizon, T-Mobile, or AT&T, just to name a few. When you have a problem with the cellular network, make sure that you have coverage and that the device has been freshly rebooted; beyond that, you are best off escalating the problem to your provider. The day-to-day connectivity we are responsible for as A+ technicians will be covered in the following sections.
There are a number of reasons why intermittent wireless connections can occur, but the two most common are lack of a good signal and interference. Increasing the number of wireless access points (WAPs) for coverage, or being closer to them, can address the lack of a good signal.
Interference can be addressed by reducing the number of devices competing for the same channel. In many instances, however, the interference may be coming from an external source, such as a microwave oven or even a Bluetooth device on the 2.4 GHz band. To avoid common interference of this nature, use an SSID that is dedicated to the 5 GHz band. Using the 5 GHz band won't guarantee you an interference‐free connection, since radar operates in this band. However, you will have better odds of selecting a channel without interference. In an effort to reduce interference and speed up wireless connectivity, 802.11ax has been developed to use a 6 GHz band. The Wi‐Fi Alliance ratified the standard as Wi‐Fi 6E in July 2020.
Another common problem with intermittent wireless is the auto-reconnection setting for the SSID. Your phone normally goes into a sleep mode every so often; this is normal and saves battery life. One of the first components to sleep for battery conservation is the wireless circuitry. When you power on your phone, the wireless circuitry needs to associate with your WAP, which will happen automatically only if you have told it to automatically reconnect. You can verify your wireless SSID and the auto-reconnection settings on Android by tapping Settings, tapping Connections, tapping Wi-Fi, tapping the current SSID, tapping Edit, and then making sure Auto Reconnect is selected. On an Apple device, tap Settings on your Home screen, tap Wi-Fi, tap the blue circled I next to your current SSID, and make sure that Auto-Join is on.
A common cause of a lack of wireless connectivity is that the wireless radio has been turned off. It happens from time to time, when an application that controls the Wi‐Fi doesn't turn it back on. On an Android phone, swipe down from the status bar, then tap the wireless icon to make sure it is lit up. On Apple devices tap Settings, then tap Wi‐Fi, and make sure that the slider is turned to the right and lit up in green.
Another common cause of a lack of wireless connectivity is that the device is in Airplane mode. When a mobile device is in Airplane mode, all the radios for the provider's cellular network, Wi-Fi, Bluetooth, and near-field communication (NFC) are turned off. This function was created so that, with one tap, you could comply with Federal Aviation Administration (FAA) or European Aviation Safety Agency (EASA) rules.
To make sure that your device is not in Airplane mode, look on the upper status bar that displays your cellular strength. If a plane appears there, then your phone is in Airplane mode. On Android, you can swipe down, then tap the icon of the airplane so that it's no longer lit up. On Apple devices, tap Settings, then Airplane Mode, and then tap the slider so that it is no longer lit up. On both Android and Apple devices, both the cellular service and wireless networks will be restored after Airplane mode is disabled.
The lack of Bluetooth connectivity can also be attributed to the use of Airplane mode, or Bluetooth can just be turned off. So, be sure to check this setting in addition to Airplane mode. On Android devices, you can swipe down, then tap the Bluetooth icon if it is not lit up. On Apple devices, go to Settings, then tap Bluetooth if it is not lit up. Depending on the phone and version of the operating system, the Bluetooth icons will be displayed on your upper status bar and will look similar to Figure 19.37.
FIGURE 19.37 Bluetooth status icons
Lack of Bluetooth connectivity can also occur when a device is not turned on or does not have the proper settings for pairing. A common pairing issue is not entering the proper Bluetooth passcode for the device. Each device, when paired, has a specific code from the vendor. Most vendors use a common code, such as 1234, but the code could also be 0000 or any other combination, so it's best to check the vendor's documentation for Bluetooth pairing information.
To pair or re‐pair a device, first ensure that the device is turned on and that it's discoverable. (Consult the vendor's documentation, as necessary.) On Android devices, tap Settings, then tap Connections, then tap Bluetooth (the phone will immediately start scanning for discoverable devices), select the available device, and enter the passcode from the vendor's documentation. On Apple devices, pairing can be performed by tapping Settings, tapping Bluetooth, tapping the device name, and entering the passcode from the vendor's documentation.
Near‐field communication (NFC) is a short‐distance wireless communication protocol. NFC is built into many mobile devices for the application of payment systems, such as Google Pay and Apple Pay. In addition, NFC is used for data exchange between mobile devices. A use case for this application is the transfer process when a new mobile device is purchased. You can simply tap the two devices together, and the new mobile device pulls the information from the existing mobile device. NFC nominally requires a distance of 4 centimeters or less to operate.
The first thing to check on the mobile device is that Airplane mode is not on. Airplane mode will impede the functionality of NFC, because NFC uses electromagnetic radio fields to enable communications between the phone and the NFC device.
The next thing to check is that the problem is not the reader. When you are attempting to pay for something and Google Pay or Apple Pay isn't working, anxiety often builds, and troubleshooting with the cashier is usually the last thing that comes to mind. However, briefly asking the cashier if anyone else has had an issue that day can rule out the reader.
The case on the mobile device can also interfere with NFC communications. If the case is a ruggedized case and has an aluminum back, it could impede the NFC signal. Simply popping it out of the case can rule out the phone case as part of the problem.
Signing out of the mobile payment system will sometimes rectify the problem. In the process of signing back in, it will also validate if the provider's network is down, as this can often be a problem for mobile payment systems. Another troubleshooting step is selecting another credit card in the mobile payment system if you have multiple cards available in the app.
AirDrop is an Apple proprietary protocol used to quickly transfer files between iPhones, iPads, and Macs. AirDrop uses a combination of Bluetooth and Wi‐Fi to transfer files, such as photos, documents, and video, just to name a few. Bluetooth is used to broadcast, discover, and negotiate communications between the two devices. Wi‐Fi is then used as a point‐to‐point communication method for the two devices to transfer the file. As you may have noticed already, there are several different processes going on and because of this there can be issues. However, AirDrop between Apple products is a very polished protocol and usually works flawlessly.
The first item to check is that Airplane mode is not on and impeding communications. Airplane mode can turn off the two critical methods of communication that AirDrop requires to function: Bluetooth and Wi‐Fi. Newer iOS devices will not turn off Bluetooth automatically in Airplane mode. However, if you turn Bluetooth off while in Airplane mode, your phone will remember this setting. Just as your device must have Bluetooth and Wi‐Fi turned on, the other person needs to have both turned on. Also, make sure that both parties involved in the transfer do not have the personal hotspot on. If personal hotspot mode is on, it will impede the point‐to‐point transfer of the files.
The next obvious item to check is that the other person is within range of your device. Since Bluetooth will broadcast and discover the other person's device, the other person needs to be in range of your Bluetooth. If the person is out of range from your Bluetooth signal, then either the phone won't be discovered or the negotiation for the transfer will not succeed.
After you have checked the connectivity between devices and have ensured that Bluetooth and Wi-Fi are working accordingly, security is the next item to check. When AirDrop first came out as a feature, it lacked security and anyone was able to send files to anyone else. Apple soon added security so that, by default, only your contacts can send you a file. If the other party is not a contact or you are not their contact, you will not be able to receive from or send to the other person (respectively). If they are not in your contacts, then setting AirDrop to receive from everyone will allow you to receive the file. However, this should be a temporary setting, since it allows anyone to send you files via AirDrop. More information on how to use AirDrop can be found here: https://support.apple.com/en-us/HT204144.
The autorotate function allows a phone to switch between portrait mode and landscape mode by sensing how you are holding the phone. Autorotate is a feature of convenience, because no matter which way you are holding the phone, you can read the information displayed. This of course is assuming you have the screen facing you.
The first item to check is that you do not have autorotate turned off or locked. On the Android operating system there are several different ways to check this, depending on the vendor and the Android version, so it is best to check your specific model of phone. On Apple devices, this can be checked by swiping down from the top‐right corner of your screen. When the Control Center opens, look for the circle with the lock in the middle and make sure it is not enabled. This is the Portrait Orientation Lock button, and if it is set, as shown in Figure 19.38, the screen will not autorotate.
FIGURE 19.38 The Portrait Orientation Lock button
If the autorotate function is not turned off or locked to a specific orientation, then you should suspect an application has possibly locked the orientation. A quick reboot will close out all running applications that could have a lock on the autorotate function. The reboot will also reset the autorotate service, in case it has crashed.
If closing the applications and rebooting the device does not remedy the issue, then you should suspect a hardware issue. There are third‐party tools available that allow you to test the sensors. Ultimately, the service center can verify that a sensor is bad and malfunctioning.
The preceding section—and its corresponding objectives—looked at mobile devices and focused on common OS and application issues; this section builds on that and focuses on security‐related issues. Once again, it looks at security concerns and common symptoms, differing only in that there is more of a focus on security. It needs to be pointed out, though, that CompTIA is stretching the definition of the word security to include more scenarios than many would typically consider. A fair number of the issues that appear in this section would have fit easily in the preceding section.
As it pertains to mobile devices there are a number of security concerns that you should be aware of. These concerns are the same for personal devices as they are for organizationally owned devices. Understanding these concerns will help you secure mobile devices and allow you to be more knowledgeable about the consequences.
An Android package (APK) is a developer file format for installation of Android applications. When developing an Android application, the developer will side‐load the application, usually using an Android tool such as Android Debug Bridge (ADB). The ADB will allow the developer to install the APK directly onto the device they are testing with.
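A side-load with ADB from a Command Prompt on the development computer looks roughly like the following; the APK filename is a placeholder, and the device must have USB debugging enabled and be authorized for the connection:

    REM Confirm the device is connected and authorized for debugging.
    adb devices
    REM Install the APK directly onto the connected device.
    adb install app-release.apk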
When the developer wants to release the final version of their application, they will upload the APK to the Google Play store. The Google Play store will then rate the content, make sure that the APK is not malicious, and finally trust it for installation. The application is then distributed from a trusted source—the Google Play app store. If at any time the application changes in a way to break the terms of service (ToS), it will be banned from the Google Play store. Examples of breaking the ToS are malicious content, illegal content, and child endangerment, just to name a few.
When you install an APK from an untrusted source, you run the risk of security concerns. If the APK is not available from a trusted source such as the Google Play app store, then the publisher might have broken the ToS. Some organizations will block the installation of APKs from untrusted sources to prevent data loss and malicious activity on mobile devices.
The developer mode on Android and Apple devices allows a developer to connect to the device via a USB connection. A developer can then create a bridge from a computer to side‐load applications as well as debug. The developer mode on Android offers myriad settings that can be changed, such as viewing running services, staying awake, setting a mock GPS location, and USB debugging, just to name a few. The Android operating system allows you to change and tweak settings that can be security concerns. Because of all these tweaks and settings changes, the developer mode is a security concern. By using the mock GPS locations, you can make your phone think that it is within a geolocation security perimeter and possibly circumvent security.
Apple's latest iOS does not have a developer mode like Android. However, you can perform development functions through the Xcode application on a Mac to help you develop iOS applications.
The development mode on Android can be accessed by navigating to Settings ➢ About Phone ➢ Software Information, then tapping Build Information seven times. The Development menu will be on the parent menu under About Phone.
The terms root access and jailbreak are synonymous with each other. However, root access is normally associated with Android and jailbreak is normally associated with Apple iOS. When you attain root access for an Android device, you literally have access as the super user named root. This access allows you to change various aspects of the operating system, such as turning on premium features like hotspot and tethering, and changing the operating system on the device by flashing new firmware.
When you root an Android phone and flash a new firmware, you no longer have the patch management from the parent vendor of the phone. For example, if you root a Samsung phone and install Havoc OS, then you no longer receive the Samsung security policies and updates. The Google Play store will also be affected, since only older versions of the applications will be available for download. This means that patch management will lag behind.
When jailbreaking the Apple iOS, you attain a higher level of access to the iOS, just as you do when you root an Android device. The motivation to jailbreak an iOS device is to access premium features, such as adding photo modes, hiding apps, and installing newer features on old devices. Jailbreaking is a security concern because you are modifying the operating system of the phone. Malicious software can easily be installed, and with the new level of access, it can hide itself.
Many organizations that employ mobile device management (MDM) create policies to prevent rooted and jailbroken devices from attaining access to organization information. This policy might restrict getting email from the device, or it could restrict access to the network in your organization.
It is best to use the operating system that has shipped with your mobile device. In many cases, once a device has been rooted or jailbroken, it cannot be reverted back to stock.
A malicious application is any application with malicious intent toward the user or the user's device. You can find malicious applications on both the Android and Apple mobile platforms. You can identify a malicious app by reviewing its permissions and contrasting them with its function; for example, if you download a camera application and it asks for permission to record calls, that should raise suspicion, and you should revoke the permissions and potentially uninstall the software.
The obvious security concern for a malicious application is that it has excessive permissions to your data or your organization's data. Periodically you should review the permissions each application installation has on your device. Another rule of thumb is to look before you click and think of the security concerns before installing software.
A bootleg application is a premium application that has been cracked or nullified to remove the digital rights management (DRM). Bootleg applications can be found for a number of premium mobile apps; they generally are in the form of an APK. Bootleg applications usually contain malicious software, because that is how the bootlegger makes their money. This obviously goes back to the discussion of verifying the source of the application and being cautious with APK installations.
Application spoofing is the act of a malicious application masquerading as a legitimate application. Application spoofing is much more prevalent in the mobile application marketplace and can be observed on both the Apple and Google mobile platforms.
The security concern for application spoofing is the possibility of installing malicious software on your device. Disguising itself as a legitimate application, it has the same security concern of access to personal data and organizational data. You can prevent application spoofing by verifying the name of the publisher, the icon for the application, and the number of installations. If you are downloading a social media platform and it has only 100 downloads, this should send up a red flag. Another method of validating the application is to read reviews for the application. Again, think before you click and download the software.
The following sections discuss common symptoms of problems with mobile operating systems and application‐related security issues. As with so many issues involving troubleshooting, common sense is most important. Using logic and a systematic approach, you can often identify and correct small problems before they become large ones.
High resource utilization can be a telltale sign that a device is running more than you think it should be—perhaps the drives are being searched or the camera is recording your every move. Monitor for high resource usage. If you discover it, find out what is causing it and respond appropriately. In this section I will cover some basic components and what you should look out for.
A higher than normal amount of traffic can be a symptom of a security issue. Spikes in traffic for extended periods of time can mean that data is being stolen from your device or relayed through your device. You should have an idea of the volume of traffic you would normally expect on your device.
You can start closing applications as you watch the volume of traffic. When the volume of traffic subsides, you probably have your culprit. Then, as covered previously, check the application's permissions to see if the application is malicious or is compromised in some way. Clear cache and data, then uninstall and reinstall the application from a known good source. This procedure may identify the issue or verify that you had a malicious application installed. A telltale sign is if the application is no longer available.
Exceeding the limits on data plans can also be symptomatic of a security issue. Data usage coincides with the volume of traffic previously discussed. A malicious application running on the device could be used to send spam or malware, or to conduct a multitude of other malicious activities from your device. A malicious application can also continually spy on you and your data. All of these activities can rob you of precious data in your data plan, pushing you over your contracted limits.
Excessive malicious use of data on a mobile device can be mitigated with two methods:
While applications, normal usage, and so on can contribute to sluggish performance, another offender could be malware or a virus. When you observe sluggish performance on your device, you need to investigate the symptom, as it could indicate a security issue. Check RAM and CPU usage; if an application is out of control, it could be infected with malware. It is best to run an antivirus/antimalware scan on the device to check it thoroughly.
Not every problem is related to a possible security threat. The normal search for a cellular signal can be just as taxing on the device. However, if you are in the normal locations in your day‐to‐day travels, such as work and home, and still experience sluggish performance, you may have an application problem (out of memory) or a security threat. In either case, the issue needs to be checked out quickly.
When you have limited Internet connectivity on your mobile device, you should not immediately think that the limited connectivity is a security symptom. Limited Internet connectivity can be a result of many different problems. Mobile devices are very susceptible to limited Internet connectivity because they contain small transceivers for wireless and cellular communications. The radio firmware also plays a big role in choosing the right radio frequency and is often the problem with connectivity.
Taking everything into consideration, if you know nothing has changed in the wireless environment, your firmware is the same, you've rebooted, and no one else is having a problem, you can suspect this is a security symptom. Malicious applications will often monopolize your connection or proxy the connection in an effort to sniff usernames and passwords. Both monopolization of the connection and proxying of the connection can create intermittent Internet connections. If your connection is being monopolized, then you will see high network bandwidth as previously discussed. There are a number of ways that a network connection can be proxied for malicious purposes, such as DNS proxy, network transmission, and wireless, just to name a few. The ideal way to combat this issue is with a mobile device firewall and antimalware software.
If all the usual causes have been ruled out, then you should suspect that a complete lack of Internet connectivity is a security-related symptom. In some rare instances, malware will cause the mobile device to have no Internet connectivity. This generally happens because a DNS server that the mobile device was maliciously pointed to has ceased to function, or the relay server that was proxying the connection has ceased to function. There are a multitude of reasons why no Internet connectivity would be experienced. As previously recommended, a good mobile device firewall and antimalware software should be employed.
Complete failure of an Internet connection is easier to diagnose than an intermittent problem. Therefore, a factory reset of the device should aid in figuring out whether the hardware is bad or the problem is software, such as malware, triggering the problem.
When a mobile device is experiencing a high number of ads, this is a security‐related symptom of adware. Adware is a type of malware that pops up ads for malicious purposes, usually to entice the user to buy something. Adware is usually the result of installing a malicious application on the mobile device. There are two ways to diagnose the problem of adware. The first depends on the number of ads and the frequency of the ads. You can start by uninstalling applications until the ads stop popping up. This method is preferable because it is probably the quickest.
The second method, a factory reset, is much more effective, but it will not identify the malicious application. A factory reset will remove any malware along with all the applications. Of course, you can start installing applications until the ads start. However, resetting a phone is a pretty anxiety‐producing process for avid mobile device users frantically trying to sign back into applications. Therefore, the first method may be preferred. Once the application is uninstalled and the device is factory reset, be sure to install antimalware software on the device.
A fake security warning on any system is a big red flag and a security symptom. Mobile devices are not exempt from fake security warnings, although these warnings on full operating systems are more common. Regardless, when a fake security warning is discovered, it should be treated as if malware is installed on the device. The device should be factory reset and antimalware should be installed prior to reinstalling the applications.
Unexpected application behavior is not always an indication that you have been infected with malware or have a security symptom. Applications have unexpected behavior all the time. However, when an untrusted and newly installed application behaves in an unexpected way, this could be a symptom of a security problem with the application.
When you experience unexpected application behavior, you should immediately question whether the application can be trusted. You can do this by reading reviews for the application to determine whether others have run into similar problems. Also judge the application by its install base relative to its reviews. For example, if an application has only 100 installs and just 5 people commenting that it's a great application, the app should fall under suspicion.
The first step to be taken is to scan the device for malware. If the application is flagged as malware, then a factory reset should be performed. Then install only the trusted applications that you use daily.
When authorized users access devices through unintended connections or unauthorized users access stolen devices, they can access the data on the device. Outside of these risks, there is always the risk of loss or theft of the device itself.
Therefore, security for mobile devices should be applied in a layered approach. Antivirus and antimalware software should be installed on the device to protect it from malicious applications. In addition, a mobile firewall should be installed along with the antivirus and antimalware software. Fortunately, there are third‐party security suites that can protect you from all these threats.
Mobile device management (MDM) software should also be employed. This software is like the Swiss army knife of security for mobile devices. It can require passcodes, the installation of antivirus, antimalware software, mobile firewalls, current updates, and so much more. One of the most notable features is the ability to remotely wipe the device in the event it is stolen or lost.
In addition, there should be a firm policy that details the encryption of data in use, at rest, and in transit. A written policy should be drafted along with procedures on how to deal with leaks when they occur. These policies are usually drafted with an insurance company in order to protect an organization in case of a data leak of personal information.
This chapter addressed systematic approaches to working with computer problems as well as troubleshooting operating systems and resolving security‐related issues. In our discussion of troubleshooting theory, you learned that you need to take a systematic approach to problem solving. Both art and science are involved, and experience in troubleshooting is helpful but not a prerequisite to being a good troubleshooter. You learned that in troubleshooting, the first objective is to identify the problem. Many times, this can be the most time‐consuming task.
Once you've identified the problem, you need to establish a theory of why the problem is happening, test your theory, establish a plan of action, verify full functionality, and then document your work. Documentation is frequently the most overlooked aspect of working with computers, but it's an absolutely critical step.
Next, we discussed operating system–related troubleshooting issues. First, we looked at common symptoms, and then we discussed some tools that can be helpful in solving problems.
Finally, we looked at security‐related troubleshooting issues as well as best practices for removing malware. Again, we started by looking at common issues and then how to solve them.
The answers to the chapter review questions can be found in Appendix A.
Which bootrec option can be used in Windows to rebuild the boot configuration file?
A. /fixboot
B. /rebuildbcd
C. /scanos
D. /fixmbr
A. ntoskrnl.exe
B. winload.exe
C. winresume.exe
D. msconfig.exe
A. ntbtlog.txt
B. regedit
C. bootrec
D. msconfig.exe
You will encounter performance‐based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
List, in order, the seven best practice steps associated with malware removal.
THE FOLLOWING COMPTIA A+ 220‐1102 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
CompTIA has identified that with the rapid adoption of cloud‐based services, system administrators have a need for scripting and remote access now more than ever. Scripting allows you to manage a system as if you have logged in and performed the task yourself, such as purging old log files from a web server to make space. Remote access technologies allow you to connect to these remote systems so that you can manually administer them.
This chapter looks at a variety of scripting languages. After completing this chapter, you may not be an expert at writing scripts—the goal is to familiarize you with the various scripting languages so that you can learn the basics of scripting and their purposes. We'll also look at various remote access methods and their security considerations.
Before diving into scripting, we'll begin by discussing the differences between a programming language and a scripting language. When you write a script, you are basically using a high‐level programming language. An example of a low‐level programming language is assembler, also known as assembly language. Figure 20.1 shows what we call the “programming pyramid.”
FIGURE 20.1 Programming pyramid
The lowest layer is the actual hardware, which is the central processing unit (CPU). Directly above the hardware layer is the executable machine code that interacts with the hardware to make it perform some useful function or process. The operating system is usually programmed in a low- to mid-level programming language, such as C/C++ or even assembly language. It is then compiled into executable machine code. Applications are often programmed in high-level languages, such as Java, C#, or VB.NET, and are compiled to executable machine code or an intermediate code. Scripts, however, are not compiled; they are interpreted, as we discuss in this section.
Depending on which layer you program on, you gain some advantages,
but at the same time you are also capped by some of the limitations of
each layer. For example, a program created in assembly language will be
quite complex because you will need to perform low‐level functions just
to add two numbers together. However, because you are at such a low
level, you have direct access to the hardware, so you are limitless in
your control of the hardware. If you were to use a higher‐level
language, such as C/C++, the application would be relatively easy to
write; adding two numbers is as simple as c = a + b;
.
Because it is a higher‐level language, however, you do not have the same
lower‐level control over the hardware as with assembly language. You
have access only to what the compiler will understand and compile to
executable machine code, as shown in Figure 20.2.
FIGURE 20.2 Compiling a programming language
Scripting languages do not need to be compiled. They are interpreted by the shell, command line, or external interpreter, as shown in Figure 20.3. The interpreter reads the script and executes the instructions in the operating system. The big difference is that you do not need to compile scripts to executable machine code, as in the previous example of a C/C++ program. Unfortunately, the higher the level, the less control you have over the process. The benefit is you can create a script rather quickly, and you don't need to compile it.
FIGURE 20.3 Interpreting a scripting language
Another big difference between applications and scripts is that scripts require applications to complete their purpose. If an application doesn't exist for a function in your script, then you should evaluate whether a script is the right course of action. For example, if you need to resize pictures, an application must exist that can be called via the command line that will take the appropriate input and resize the pictures. If an application doesn't exist that can resize the picture, then you might need to write an application that resizes a picture in lieu of a script. A script cannot normally create functions of this nature; it can only call on them.
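To make this concrete, here is a minimal PowerShell sketch of the picture-resizing example. It assumes the third-party ImageMagick command-line tool (magick) is installed and on the PATH, and the folder names are made up for illustration; the script only orchestrates the work that the external application performs.
# Create an output folder, then call the external magick tool for each picture
New-Item -Path .\Photos\Resized -ItemType Directory -Force | Out-Null
Get-ChildItem -Path .\Photos -Filter *.jpg | ForEach-Object {
    magick $_.FullName -resize 50% (Join-Path .\Photos\Resized $_.Name)
}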
There are several different types of scripting languages. Each scripting language has its own nuances and syntax. This section discusses some scripting basics that you can apply to any scripting language. You'll only have to adjust the syntax to the scripting language you've selected to get the desired outcome.
A variable is a symbolic word or combination of letters that can be used to hold a value. The value can be a number, text, Boolean, or array. Numbers can be either integers or floating-point values, but the scripting language you choose must support floating-point math. A floating-point value contains a decimal portion, whereas integers are whole numbers. Text is also called a string and has no numeric equivalent. A string containing 30 will not have the value 30; it will only be a string of characters containing a 3 and a 0. Mathematical computations cannot be performed on a string. Boolean values are true or false values, and arrays are collections of strings, numbers, and Boolean values.
You might see a programming language described as a strongly typed language. That means the variable must be defined as to what type of value it will hold and how long the value will be. This is yet another difference between scripting languages and programming languages: scripting languages do not need variables to be defined and are normally not typed. Most of the time, you can just load a variable with a value and it is dynamically typed. This is not the most economical use of memory, but that shouldn't be an issue, because scripts are usually simple ways to complete a simple task.
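The following short PowerShell sketch (the values are arbitrary) shows each of the value types described earlier and how a variable is dynamically typed by whatever you load into it:
PS C:\Users> $num = 30                  # integer
PS C:\Users> $str = "30"                # string; it looks like a number but is only characters
PS C:\Users> $flag = $true              # Boolean
PS C:\Users> $list = 1, "two", $false   # array holding a mix of value types
PS C:\Users> $num + 5
35
PS C:\Users> $str + 5
305
PS C:\Users> $num.GetType().Name
Int32
Notice that adding 5 to the string "30" concatenates the characters rather than adding the numbers, which is exactly why the distinction between numbers and strings matters.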
The name of a variable should have some meaning and cannot be a reserved word. So if you had a variable that needed to hold a counter, you could simply create a variable called count. However, you would need to ensure that count is not already used by the scripting language for a function or command; if it is, it's a reserved word and cannot be used as a variable.
The following is an example of a variable named count
being created in the PowerShell scripting language and loaded
with the value 1
. Generally, you load a value by specifying
the variable, followed by an equal sign, and then the value. In
PowerShell, the syntax dictates that you place a $
in front
of the variable in order to notify the interpreter that you are
addressing a variable. Each scripting language is slightly different,
but the concept remains the same. You can also view the contents of the
variable either by typing the variable name and pressing Enter or by
using the echo
command before the variable.
PS C:\Users> $count = 1
PS C:\Users> $count
1
PS C:\Users> echo $count
1
PS C:\Users>
As previously mentioned, variables hold values in scripts so that we can do things such as counting. Environment variables also hold values, but they are used for the environment of either the system or the current user. Environment variables often hold values like the path to an executable or the location of the temporary folder.
Environment variables are inherited in a structured fashion: system, user, and then program. They can be overwritten at any underlying level, but only for that entity. For example, if a script changes the user environment variable for the temporary folder, the change is applicable only for that script; if the user environment itself is changed, it will affect all scripts run by that user. The main types you will encounter in the operating system are system variables, which apply to the entire machine, and user variables, which apply only to the current user.
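As a quick illustration, PowerShell exposes environment variables through the $env: drive; the paths shown here are typical but will differ on your system:
PS C:\Users> $env:TEMP                     # the current user's temporary folder
C:\Users\UserOne\AppData\Local\Temp
PS C:\Users> $env:Path -split ';'          # each folder in the executable search path, one per line
[ Output Cut ]
PS C:\Users> $env:MYVAR = "test"           # create a variable that lasts only for this session
PS C:\Users> Get-ChildItem Env:            # list every environment variable
[ Output Cut ]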
Your scripts should have a level of end‐user readability. The code should describe what is going to happen without external documentation. Although we strive to create eloquent scripts that read like computer poetry, sometimes we need to make comments inside the code.
Each language has a different way to make comments. The following example uses the PowerShell syntax. As you progress through the next section, “Scripting Languages,” you will see the way to comment in the various languages. You'll notice in the following code that we've added some commenting that is generally used for script creation. The three basic components of the comments are author name, authored date, and the purpose of the script. Although the following comment block is best practice, it is not required; one‐line comments inside the code are allowed also. In this scripting language, PowerShell, the comment is preceded by the # symbol. Other scripting languages have their own syntax, and you'll see the various examples throughout this chapter.
# Jon Buhagiar
# 10/16/18
# This script will output a directory listing of the C drive
Get-ChildItem C:\
When writing a script, you may need to create a controlled loop
inside the script. An example of a controlled loop is to read a file and
then do some things for each line contained in the file. You can use two
basic loops in your scripts: for
loops and
while
loops.
A for
loop is a stepped loop with a defined beginning
and a defined end; each step is defined as well. The following example
shows a for
loop for PowerShell that starts at 1 and counts
to 10. The output is the word Number and an incrementing
number. You can see that the variable is initially set to 1
with ($count=1
), and a test is done to check if the
variable is less than or equal to 10 with ($count ‐le 10
).
The for
loop is stepped by adding 1 to the existing number
with ($count++
). This is just one example of a
for
loop, and every scripting language is a bit different
in syntax, but the concept is the same.
For($count=1; $count -le 10; $count++) {
Write-Host "Number $count"
}
[ Output of Script ]
Number 1
Number 2
Number 3
Number 4
Number 5
Number 6
Number 7
Number 8
Number 9
Number 10
A while
loop continues to loop
until either it is exited or a condition is met. The while
loop has no defined beginning, only a defined end, and it can be exited
without consequence to the function. In the following code, the variable
count
is set to 1
, and then the
while
loop begins. Inside the while
loop, the
count
variable is incremented by 1
each time
it loops with ($count++
). The loop will continue as long as
count
is less than 11
with
($count ‐lt 11
). The output will be identical to the prior
example.
$count = 1
While ($count -lt 11) {
Write-Host "Number $count"
$count++
}
[ Output of Script ]
Number 1
Number 2
Number 3
Number 4
Number 5
Number 6
Number 7
Number 8
Number 9
Number 10
Branch logic enables code to deviate, or branch, depending
on a condition. The if
statement is the most common
conditional branch logic found in scripts. The if
statement
is usually followed by a condition and a then
clause. It
can even use an else
clause. The then
clause
is executed only if the condition is true; if the condition is not true,
the else
clause is executed.
In the following example, the string str
is set to
1
. The if
statement then checks the condition
of str
being equal to 1
with
($str ‐eq 1)
. It is true, so the next statement of Write‐Host "Yes"
is executed. It's
important to note that PowerShell implies the then
clause;
only the else
clause needs to be spelled out. If
str
were anything other than 1
, the condition
would be false, executing the else
clause of
Write‐Host "No"
. This is an example for PowerShell, but in
the next section, “Scripting Languages,” you will find many different
examples for each scripting language.
$str=1
If ($str -eq 1) { Write-Host "Yes" } Else { Write-Host "No" }
[ Output of Script ]
Yes
Now that you understand the basic differences between a programming language and a scripting language and some scripting basics, let's look at several different scripting languages you are likely to encounter as a technician. As we cover the various languages, we'll highlight which operating systems they are common to or natively supported on—which is often the biggest deciding factor when you choose a scripting language.
Windows batch scripts have been around since the release of
Microsoft's Disk Operating System (DOS) back in 1981. The original
script interpreter was command.com
, and since then
Windows NT was released, which included an updated command‐line
interpreter called Command Prompt, or cmd.exe
. The
original file extension used with batch scripts was .bat, but today both
.bat
and .cmd
can be used to initiate a batch
script because they are both associated with the command‐line
interpreter cmd.exe
.
A Windows batch script is probably the fastest way to get something
done when all you need is a list of commands run one after the other.
For example, say you need to create new user accounts for a school. You
can get the usernames in an Excel sheet, and you can create a script
from the entries by using an Excel formula, adding in the column of the
username. As an example, the formula
="NET USER" & A1 & "PassW0rd /ADD /DOMAIN"
copied
into cell B1 will produce the line you need to execute. Then you just
need to drag the formula down, and the script will be built. A quick
copy and paste, and the script will look similar to the following
output. It's a quick and dirty way to create user accounts. Using the
combination of an Excel sheet and a copy‐and‐paste into a batch script
is the fastest way to build and execute a laundry list of commands.
NET USER UserOne PassW0rd /ADD /DOMAIN
NET USER UserTwo PassW0rd /ADD /DOMAIN
NET USER UserThree PassW0rd /ADD /DOMAIN
NET USER UserFour PassW0rd /ADD /DOMAIN
[ Output Cut ]
Batch scripts can also contain logic. The
following is a simple batch script that tests whether a variable of
FLIPFLOP
is equal to 0
using an
if
statement. If FLIPFLOP
is equal to
0
, the script will proceed to write to the screen the word
Zero, set FLIPFLOP
to a value of 1
,
and then jump to :LOOP
. Because FLIPFLOP
is
set to 1
, it is not equal to 0
; so, the
else
clause in the if
statement will be
processed, and the word One will be printed to the screen, and
FLIPFLOP
will then be set to 0
, and the script
will jump to :LOOP
again. This will proceed until Ctrl+C is
pressed to stop the processing of the script.
@ECHO OFF
REM FlipFlop Script
SET /A FLIPFLOP=0
:LOOP
IF %FLIPFLOP% EQU 0 (ECHO Zero && SET /A FLIPFLOP=1) ELSE (ECHO One && SET /A FLIPFLOP=0)
GOTO :LOOP
PowerShell allows for the automation and management of the Windows operating systems, as well as cloud‐related services such as Microsoft Azure and Microsoft 365. One of the limiting features of any scripting language is its ability to perform a needed task. PowerShell was created to be totally extensible. It was built on the .NET Framework Common Language Runtime (CLR). Any programmable library a .NET application has access to, PowerShell can use, which is what makes it so extensible. It has been used since Windows Server 2008 as a configuration tool for the operating system. In fact, most of the time when you configure a service in the Server Manager tool, you actually run a PowerShell command in the background. Many of the GUI wizards allow you to see the PowerShell script that will be executed so that you can reuse the line in a script of your own.
PowerShell introduced the concept of cmdlets. PowerShell has over 100 cmdlets installed, called the core cmdlets. You can always add your own cmdlet by creating a PS1 script and installing it into the PowerShell cmdlet store in the operating system, as you will do in Exercise 20.2. A cmdlet is simply a verb and a noun separated by a dash. Here are a few examples:
Get-Item: Gets an item, such as a directory listing, environment variable, or Registry key.
Set-Item: Changes the value of an item, such as creating an alias or setting an environment variable.
Copy-Item: Copies an item, such as a file or folder.
Remove-Item: Deletes an item, such as a file, folder, or Registry key.
Move-Item: Moves an item, such as a file, folder, or Registry key.
These are just a few of the built-in core cmdlets for PowerShell.
Others exist for ‐Item
, such as Rename‐Item
,
New‐Item
, Invoke‐Item
, and
Clear‐Item
. Each one performs a corresponding action on the
noun following the dash. You can even extend the functionality of a
command with your own PS1 cmdlet. There is a Get‐Verb
command so that you can see all the appropriate verbs that you can use
for your own command.
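The following quick sketch shows a few of the -Item cmdlets in action; the C:\Temp paths are hypothetical and used only for illustration:
New-Item -Path C:\Temp\demo -ItemType Directory                    # create a folder
Copy-Item -Path C:\Temp\demo -Destination C:\Temp\demo2 -Recurse   # copy it
Rename-Item -Path C:\Temp\demo2 -NewName demo-backup               # rename the copy
Remove-Item -Path C:\Temp\demo-backup -Recurse                     # delete it again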
If you use the Get‐Item
cmdlet and specify a folder,
information about that folder will be returned. If you want to see all
the other folders contained within that folder, you can use a
*
wildcard. Or you can use the Get‐ChildItem
cmdlet and specify the directory, as follows:
PS C:\Users\UserOne> Get-item c:\*
Directory: C:\
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 11/28/2017 9:52 PM Dell
d----- 5/16/2018 9:32 PM NVIDIA
d----- 4/11/2018 7:38 PM PerfLogs
d-r--- 5/28/2018 10:04 PM Program Files
d-r--- 8/12/2018 5:31 PM Program Files (x86)
d-r--- 5/28/2018 6:11 PM Users
d----- 10/18/2018 10:17 PM Windows
PS C:\Users\UserOne> Get-ChildItem c:\
Directory: C:\
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 11/28/2017 9:52 PM dell
d----- 5/16/2018 9:32 PM NVIDIA
d----- 4/11/2018 7:38 PM PerfLogs
d-r--- 5/28/2018 10:04 PM Program Files
d-r--- 8/12/2018 5:31 PM Program Files (x86)
d-r--- 5/28/2018 6:11 PM Users
d----- 10/18/2018 10:17 PM Windows
When you use the dir
command in
PowerShell to view a directory listing of files, you are actually using
something called an alias. The alias then calls the
Get‐ChildItem
cmdlet. To see all the aliases on the
operating system, you can use the Get‐Alias
cmdlet. You can
see all the commands mapped over to PowerShell cmdlets, as follows:
PS C:\Users\UserOne> get-alias
CommandType Name
----------- ----
Alias % -> ForEach-Object
Alias ? -> Where-Object
Alias ac -> Add-Content
Alias asnp -> Add-PSSnapin
Alias cat -> Get-Content
Alias cd -> Set-Location
Alias CFS -> ConvertFrom-String
Alias chdir -> Set-Location
Alias clc -> Clear-Content
Alias clear -> Clear-Host
Alias clhy -> Clear-History
Alias cli -> Clear-Item
Alias clp -> Clear-ItemProperty
Alias cls -> Clear-Host
[ Output Cut ]
PowerShell also has a great way to develop scripts in what is called an Integrated Scripting Environment (ISE). The ISE allows you to write a script and test it without having to switch back and forth between a text editor and the execution environment. In addition to the writing and execution environment in the same window, there is a type‐ahead feature that allows you to pick a command if you remember only the first few letters, as shown in Figure 20.4. You can also use the Tab key to complete a command, which makes writing scripts easy when you know the first couple of letters. Formatting is also automated and makes for easy‐to‐read scripts. The formatting highlights variables and commands in different colors so that you can differentiate between the two.
Before any script can be executed on the Windows operating system,
you must first allow scripts to run. By default, any PowerShell scripts
will be blocked. Exercise 20.2
shows you how to “unrestrict” PowerShell scripts using the
Set‐ExecutionPolicy
cmdlet.
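For example, from an elevated PowerShell prompt you can check the current policy and then relax it; RemoteSigned, shown here, is a common and less permissive alternative to Unrestricted:
PS C:\> Get-ExecutionPolicy
Restricted
PS C:\> Set-ExecutionPolicy RemoteSigned   # local scripts run; downloaded scripts must be signed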
FIGURE 20.4 PowerShell ISE
Visual Basic scripts—also known as VBScripts—are scripts based on Microsoft's Visual Basic language. VBScript technology has been around since Windows 98 and Windows NT, so it is very mature, but it is slowly being replaced by PowerShell, as we'll explain. VBScripts also run only on Windows operating systems, unlike PowerShell scripts.
VBScript technology is based on the Component Object Model (COM) to allow interaction with the operating system. Any COM object that can be instantiated can be accessed through a VBScript. As you learned in the preceding section, PowerShell is based on the .NET Framework. The Component Object Model predates the .NET Framework as a way to register programming libraries with the operating system. By default, VBScripts cannot access programming libraries that have been written for the .NET Framework, which is why VBScripts are slowly losing popularity and support by Microsoft.
VBScripts are still extremely useful when it comes to a structured language for creating login scripts. As previously mentioned, scripts are an interpreted language and are not compiled. VBScripts are no different; they require an interpreter to process. There are three main interpreters that can process VBScripts: Windows Scripting Host (WSH), Internet Information Services (IIS) Active Server Pages (ASP), and Internet Explorer. ASP and Internet Explorer are deprecated, so we will focus on WSH.
The Windows Scripting Host is an environment that allows you to run
VBScripts from the command line. By default, when a VBS script
is run, a program called wscript.exe
processes the script.
Any output will be sent to a Windows message box that you must close by
clicking OK. This can be quite annoying if you have multiple lines of
output, as each line will pop up a message box you have to close. A
VBScript can also be executed with the cscript.exe
program.
This version of the VBScript processor outputs to a console window—the
name stands for console script.
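As a quick illustration (login.vbs is a hypothetical script name), the same script can be run with either host from a PowerShell prompt:
# Run with the GUI host: each line of output appears in a pop-up message box
wscript.exe login.vbs
# Run with the console host: output goes to the console; //nologo suppresses the banner
cscript.exe //nologo login.vbs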
VBScripts are normally edited in Notepad and saved with a
.vbs
extension. A number of third‐party editors allow
type‐ahead. As previously mentioned, type‐ahead is a feature that allows
for the completion of code after a few letters have been entered.
Microsoft supports an editor called Visual Studio Code, which you can
download from https://code.visualstudio.com
.
The following VBScript will perform the same output as the two preceding examples. As you can tell, the syntax is a little different.
'FlipFlop Script
Do Until FLIPFLOP > 2
If FLIPFLOP = 0 Then
Wscript.Echo "Zero"
FLIPFLOP = 1
Else
Wscript.Echo "One"
FLIPFLOP = 0
End If
Loop
Bash stands for the Bourne Again Shell. Bash is backward compatible with its predecessor, called the Bourne shell (or sh). Whether you're using the Bash shell or the sh shell, either will perform similar functions for scripting purposes. The shell itself interprets commands for scripting, similar to the Windows DOS shell. However, the Bash and sh shells are much more advanced in their structure and functionality than DOS. Linux/UNIX shell scripts are basically how the operating system boots itself up and starts all the necessary services.
Linux and UNIX scripts often end with the .sh extension to signify to the end user that the file contains a script (text) and not compiled executable code. Shell scripts do not require
the .sh
extension in order to be executed. They do,
however, require execute permissions to be applied. This can be done
with the chmod
command, as you learned in Chapter 16. Another nuance is that you must
specify the relative path using ./
and then the script—for
example, ./script.sh
.
Shell scripts can be edited in any text‐based editor. The vi editor was probably the first editor—and on older systems, your only choice. Today, there is a multitude of editors from which to choose, such as pico/nano, jed, Gedit, and Kate/Kwrite, just to name a few. Any of these editors will do a fine job for the basic editing of scripts. Some people find that opening two consoles is the fastest way to write a script on Linux/UNIX. One console serves as the editing console, with the text editor loaded and saving changes that are made, while the other console serves as the execution environment.
The following is a simple shell script that performs identical
functionality to the scripts you have seen throughout this chapter.
There are two major things to note with this example. The first is the
syntax of the script, which differs slightly from the other examples but
basically looks similar. The second thing to note is the directive on
the first line. The directive tells the operating system which shell to
process the script in. The #!
is called a
hashbang. The hashbang lets the operating system know the path
of the script interpreter that will be used to process the script—in
this case, it is /bin/bash
.
#!/bin/bash
#FlipFlop Script
FLIPFLOP=0
while [ $FLIPFLOP -lt 2 ]; do
  if [ $FLIPFLOP -eq 0 ]
  then
    echo Zero
    FLIPFLOP=1
  else
    echo One
    FLIPFLOP=0
  fi
done
The Python scripting language was first released in 1991, and it has gained considerable popularity over the past 10 years or so. It is not installed by default on Windows, and current versions of macOS no longer ship with it either. If you want to use Python on a Windows or macOS operating system, you need to visit www.python.org to download the latest version, and then install it. If you are running a Linux operating system, Python is usually preinstalled; if not, you can install it through the package management system of the operating system.
The Python installation does not include a full-featured integrated development environment (IDE), also known as an Integrated Scripting Environment (ISE). The installation contains little more than the interpreter and some documentation to get you started. You can use any text editor to create and edit scripts. An extremely popular Python IDE is the free Community edition of PyCharm, by JetBrains. The IDE allows for script development similar to the PowerShell ISE but with many more features, as shown in Figure 20.5.
FIGURE 20.5 The PyCharm IDE
Python was created to be an easy scripting language to learn, and it is very forgiving with syntax. It is probably one of the best languages we can recommend for getting started with scripting because it is so forgiving, unlike Bash or Windows batch scripting. Python doesn't lack features, either. Like VBScript and PowerShell, Python is extensible and can use external libraries. One disadvantage of Python is that it has not been widely adopted in enterprise environments, so you will not be able to build login scripts with it; the interpreter does not have direct ties into the operating system. You'll often find that your flexibility is limited when you try to integrate a Python script into an enterprise process.
Python scripts normally end with the .py extension so that
the end user can identify the scripting language used within the file.
The .py
extension also allows the script to be launched on
Windows operating systems. With Linux/UNIX and macOS systems, however,
the hashbang defines the interpreter to use and generally looks
something like #! /usr/bin/python3
.
The following script was written in Python and follows the same functionality as the previous examples in this chapter. As you can see, the program's readability is similar to the earlier examples and the syntax differs slightly.
#FlipFlop Script
FLIPFLOP = 0
while FLIPFLOP < 2:
    if FLIPFLOP == 0:
        print("Zero")
        FLIPFLOP = 1
    else:
        print("One")
        FLIPFLOP = 0
JavaScript and the Java programming language have several similarities; for example, they are both object-oriented languages, and their syntax is similar. However, the similarities end there: Java is a compiled programming language, whereas JavaScript is an interpreted scripting language.
JavaScript is mainly interpreted in web browsers to allow for
interactive web pages. JavaScript is one of the three core web
technologies; the other two are Hypertext Markup Language
(HTML) and Cascading Style Sheets (CSS). JavaScript can
also be adapted to run outside the web browser with a runtime called
Node.js
. Both web browser JavaScript and
Node.js
scripts end in the .js extension, which
identifies the contents of the file as JavaScript code.
JavaScript can be edited with any text‐based editor. Microsoft Visual Studio Code does an excellent job of formatting for Windows‐based editing of JavaScript. Brackets is another popular editor for JavaScript and runs on a variety of platforms.
The following is an example of JavaScript code. The example is
similar to the previous examples in relation to functionality. This
particular script was coded to run in Node.js
, since
JavaScript normally outputs to HTML. The structure is similar to the
previous examples. As always, every language differs slightly in
syntax.
//FlipFlop Script
FLIPFLOP = 0;
while (FLIPFLOP < 2) {
if (FLIPFLOP == 0) {
console.log ('Zero');
FLIPFLOP = 1;
} else {
console.log ('One');
FLIPFLOP = 0;
}
}
Scripting is a fairly new objective for the CompTIA A+ exam, and it may look overwhelming to you. You are not expected to have mastered the skill of writing scripts such as those included in this chapter. They have been included so that you can visualize the various languages and understand the syntax. You will be required to read and understand what a script is doing, as well as identify the various elements of a script, such as variables and their types, branch logic, and basic loops.
In addition to basic comprehension of scripts, you will need to know the various use cases where you may find yourself scripting something together. The basic rule of thumb should always apply: if the task is repetitious and needs to be completed several times, then a script should be developed. For example, if you needed to create 20 users, then a script is your best choice. Time versus reward should be calculated based on the time it takes to develop the script compared to the time it will take to complete the job. One consideration is how often you must complete the task; if you have three users you need to create every week, then developing a script is a good investment of time.
In this section, we'll examine several different scenarios in which you might find yourself developing scripts to complete.
The most compelling situation for the use of a script is one that requires some form of automation for a task. Scripts are perfect for basic automation tasks, such as creating users, adding users to groups, or even more intricate and sophisticated tasks. When a task is automated with a script, it guarantees that the task will flow the same every time it is executed.
There are two types of scripts that you will most likely create: scripts for automating your own tasks and scripts that automate tasks for others. During your career you will most likely find tasks that you have to do over and over. These tasks should be automated as much as possible, and each repetitive task should have its own script. As a best practice, you should create a folder that contains all the scripts you use on a daily basis. This way, you always know where they are, and when you move to a new computer, you can simply copy them over. Obviously, when you create scripts for others, you won't use them on a daily basis. However, these scripts should also be grouped together in a common folder, since you will probably reuse a part of one script or the entire script for another user.
The following is an example of a task that should be automated. It
assumes that you have the Remote Server Administration Tools (RSAT)
installed on your system, which includes the dsquery
and
dsget
commands. The command line will query Active
Directory for users with test in their name. The output from
the initial command of dsquery user ‐name *test*
accomplished that task and outputs the distinguished name (DN) of the
user. We then pipe that output to the
dsget user ‐samid ‐ln ‐fn ‐email
command line and that
retrieves the username, first name, last name, and email address of each
user.
C:\sys>dsquery user -name *test* | dsget user -samid -ln -fn -email
samid fn ln email
testuser1 user1 test test.user1@wiley.com
testuser2 user2 test test.user2@wiley.com
testuser3 user3 test test.user3@wiley.com
C:\sys>
This is a handy command line, but typing it in every time you need an
answer like this is tedious. Luckily, we can write a simple script where
we just need to change one part of the script. Where we had
test
, we simply replace it with %1
to capture
the first argument from the Windows batch script. Then we just start
Notepad and copy the line in and save it as lookup.cmd
.
dsquery user -name *%1* | dsget user -samid -ln -fn -email
Then when we run it, we get the following output:
C:\sys>lookup test
C:\sys>dsquery user ‐name *test* | dsget user ‐samid ‐ln ‐fn ‐email
samid fn ln email
testuser1 user1 test test.user1@wiley.com
testuser2 user2 test test.user2@wiley.com
testuser3 user3 test test.user3@wiley.com
C:\sys>
The script works just like before, but each command is echoed to the
console. This is easily fixed by editing the file
lookup.cmd
with the following additional lines:
@echo off
dsquery user -name *%1* | dsget user -samid -ln -fn -email
Now when the file is executed, you won't see the echo of the command actually executed, but only the output of the command, as shown here:
C:\sys>lookup test
samid fn ln email
testuser1 user1 test test.user1@wiley.com
testuser2 user2 test test.user2@wiley.com
testuser3 user3 test test.user3@wiley.com
C:\sys>
We can refine the script further to add some branch logic, so if the user doesn't supply an argument, it explains the argument required:
@echo off
if {%1}=={} (
echo.
echo You must supply the following.
echo ex. %0 {name of user}
echo.
goto :END
)
dsquery user -name *%1* | dsget user -samid -ln -fn -tel -email
:END
The process of refining the script is called the development process. The first script you build will have common parts that you will reuse, such as the error handling when no argument is supplied. In this example we used a Windows batch script, but it's recommended to use whichever scripting language(s) you are comfortable with. There is no right way to develop a script, and the purpose of this example is to show you the thought process involved in writing a script.
Another common task might be to restart multiple machines. You can solve this problem in several different ways, and it's all about the end goal of the task. If this is a one‐time task, then you could write a simple script, such as the following:
shutdown /r /m \\computer1 /t 0
shutdown /r /m \\computer2 /t 0
shutdown /r /m \\computer3 /t 0
shutdown /r /m \\computer4 /t 0
If this is a recurring task, then you could
develop a script that reads a text file into a variable and restarts
each computer. It sounds complex, but it really isn't when you break
down the task into smaller pieces. For example, say you need to read a
file of computer names, and then walk through each line in the files and
restart the computer. For this example, let's switch to PowerShell,
since it allows reading of files and parsing them. Start by creating a
text file named list.txt
with four computer names in it: computer1
through
computer4
. Now we'll test the script by saving it as
restartcomps.ps1
.
$list = Get-Content .\list.txt
ForEach($line in $list) {
Echo $line
}
The script will read the contents of the list.txt
file
into the variable $list
. Then the script will load the
variable $line
for each entry in the $list
collection. When we run the script, we get the following output:
PS C:\sys> Set-ExecutionPolicy Unrestricted
Execution Policy Change
The execution policy helps protect you from scripts that you do not trust.
Changing the execution policy might expose you to the security risks described
in the about_Execution_Policies help topic at https://go.microsoft.com/fwlink/?LinkID=135170. Do you want to change the execution policy?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"): y
PS C:\sys> ./restartcomps.ps1
computer1
computer2
computer3
computer4
PS C:\sys>
Now that we have the first two parts of the script working, we just need to add the functional piece. While we are refining the script, let's also remove the variable and read the lines directly:
foreach($line in $(Get-Content .\list.txt)) {
Write-Output $line
Restart-Computer -ComputerName $line
}
When the script is run, the output will remain the same, but you'll also be restarting the computers that are written as output to the console. We can make the script fancier by allowing arguments for the file to read, but we think you have the idea behind the script development process. It is a work in progress, and you always have some outstanding code to automate your job.
Remapping network drives can be done in a multitude of ways with
VBScript, PowerShell, Windows batch scripts, or some other favorite
language. However, Windows batch scripts and PowerShell are the most
common. In the following, we see a script that maps a few drives using a
Windows batch script. When we map a drive, we are mounting a remote
filesystem through to a drive letter. For example, we can mount the
remote filesystem of \\server1\files
to a local drive
letter of m:
net use m: \\server1\files
net use n: \\server2\files
net use o: \\server3\files
The same script can also be developed in PowerShell:
New-PSDrive -Name "m" -PSProvider FileSystem -Root "\\server1\files"
New-PSDrive -Name "n" -PSProvider FileSystem -Root "\\server2\files"
New-PSDrive -Name "o" -PSProvider FileSystem -Root "\\server3\files"
These scripts don't need to be complex like the previous scripts. We just want to obtain a reproducible result every time the script runs.
When scripting is combined with the installation of applications, you
can perform a number of functions that are not possible on their own.
For example, you can write a script that installs the prerequisites for
an installation and then only succeeds if the subsequent installations
are successful. A lot of these scenarios are going to be custom to your
specific needs and environment. The following is an example of a
PowerShell command that will install an application called
App.msi
:
Invoke-CimMethod -ClassName Win32_Product -MethodName Install -Arguments @{PackageLocation='\\server\installs\App.msi'}
A few assumptions are made with the PowerShell example. The first assumption is that you are calling an MSI installer. If you aren't, the code will not work, since every installer has its own methods for invoking an installation. The second assumption is that you are an administrator of the operating system. Writing a PowerShell script will not circumvent Windows security.
Creating a scripted installation of an application is rewarding, but it is also time consuming. You will most likely have to refine your script several times before it works as expected. This means that you will need to install the application many times to get it right. However, if you have an application that requires installation across a number of computers, you can easily reclaim the time spent on the script.
Backups should be entrusted to backup software that is engineered to expire media, rotate media, and generally back up and restore the data and systems that the organization depends on. This type of software is considered off-the-shelf backup software, and scripts are not expected to replace it. However, by using scripts you can automate pieces of the backup process to make it much more reliable.
A common example of automating backups with scripts is the backup of SQL databases. You can use off‐the‐shelf backup software to back up the database. However, one common problem is that the agent installed on the SQL server that facilitates the backup will take a snapshot of the database and back it up in whole. This might seem like what we are trying to achieve as an end goal—except when you try to restore it, you'll quickly find that you need to restore the database in whole, even though you only need one table of records.
If you preprocess the database with a script, you can export it to a file. This file can then be backed up and restored to any SQL server, even down to the record level. The following is a maintenance script used with Microsoft SQL to back up the database to a file. The script is written in the SQL scripting language.
USE TestRecords
GO
BACKUP DATABASE [TestRecords]
TO DISK = N'D:\DBBackups\TestRecords.bak'
WITH CHECKSUM;
You can automate many other types of backups, such as custom application data, Registry settings, email data, and any other type of data you can address with a script. However, one of the most common backups you will automate with scripts is the SQL database backup.
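To tie this into an automated job, a maintenance script like the one above can be saved to a file and run on a schedule. The following one-liner is a sketch that assumes the sqlcmd utility is installed and that the T-SQL above was saved as C:\Scripts\backupdb.sql; the instance name is only an example.
# Run the saved T-SQL backup script against a local SQL Server instance using Windows authentication
sqlcmd -S .\SQLEXPRESS -E -i C:\Scripts\backupdb.sql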
Although you can use applications such as Device Manager to gather information about a computer, using the GUI is not scalable when you need to gather information from a large group of computers. This is where scripts come in handy to gather information and even export the information to a file for you. In the following examples, we will look at some simple PowerShell commands for gathering information and exporting the information. However, you can use any scripting language you like and Windows batch scripting is very commonly used as well.
PS C:\sys> Get-Service
Status Name DisplayName
------ ---- -----------
Running agent_ovpnconnect OpenVPN Agent agent_ovpnconnect
Stopped AJRouter AllJoyn Router Service
Stopped ALG Application Layer Gateway Service
Stopped AppIDSvc Application Identity
Stopped Appinfo Application Information
[ Output Cut ]
The Get‐Service
cmdlet will show
you all the services running on the operating system. It will output a
long list and will display each service's Status, Name, and DisplayName.
By piping the output to the Export‐Csv
cmdlet, we can
output a lot more detail and send it directly to a comma‐separated
values (CSV) file. An example of this is in the following command
string:
PS C:\sys> Get-Service | Export-Csv .\Services.csv
The Get‐Service
cmdlet isn't the only command you can
use to gather data; there are various other commands that let you gather
data and export it. Obviously, you can chain these commands together in
a script and gather a large amount of data. You can then use scripts to
mine the data for specific information, such as a service state or free
space.
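For example, the following sketch gathers the free space mentioned above from the local fixed disks and exports it to a CSV file; Win32_LogicalDisk is a standard CIM class, and the output file name is arbitrary.
# DriveType=3 limits the query to local fixed disks; sizes are converted from bytes to gigabytes
Get-CimInstance Win32_LogicalDisk -Filter "DriveType=3" |
    Select-Object DeviceID,
        @{Name='SizeGB';Expression={[math]::Round($_.Size/1GB,1)}},
        @{Name='FreeGB';Expression={[math]::Round($_.FreeSpace/1GB,1)}} |
    Export-Csv .\DiskSpace.csv -NoTypeInformation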
The Windows platform has been automatically patching itself since the release of Microsoft Update and Windows XP. Microsoft Update and the Windows platform have evolved since those early releases into a robust feature that keeps Windows up to date. However, if you need an immediate result to patch a security hole, then scripting a solution is the best remedy.
There are a number of ways to get patches to immediately install on Windows. The approach you choose depends on your patch management solution, such as Windows Update, Windows Server Update Services (WSUS), Microsoft Endpoint Configuration Manager (MECM), or a third‐party patch management solution. For the remainder of this section, we will use the Windows Update solutions for examples.
If you want to script Windows Updates to initiate patching via
Windows batch scripting, then the utility of choice is
wuauclt.exe
. The command can be directed to detect patches
with the /detectnow
argument. However, don't expect
anything elaborate; the utility will not notify you that it is doing
anything. If you want to watch the progress, then you'll need to keep an
eye on the log file, C:\Windows\WindowsUpdate.log
.
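A minimal sketch of that approach is shown here; note that on current Windows 10/11 builds the readable log file first has to be generated with the Get-WindowsUpdateLog cmdlet.
wuauclt.exe /detectnow                                    # ask the Windows Update agent to check for patches
Get-Content C:\Windows\WindowsUpdate.log -Tail 20 -Wait   # watch the end of the log as new lines are written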
An alternative is to use PowerShell and a PSGallery
module called PSWindowsUpdate
. This module allows you to
fully automate the patch management and obtain a great level of detail.
For example, after installing the module you can execute the command
Get‐WindowsUpdate
to obtain the pending list of available
updates, as shown here:
PS C:\sys> Install-Module -Name PSWindowsUpdate
NuGet provider is required to continue
PowerShellGet requires NuGet provider version '2.8.5.201' or newer to interact with NuGet-based repositories. The NuGet
provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or
'C:\Users\bohack\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider by
running 'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install
and import the NuGet provider now?
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y
Untrusted repository
You are installing the modules from an untrusted repository. If you trust this repository, change its
InstallationPolicy value by running the Set-PSRepository cmdlet. Are you sure you want to install the modules from
'PSGallery'?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"): y
PS C:\sys> Get-WindowsUpdate
ComputerName Status KB Size Title
------------ ------ -- ---- -----
CERES ------- KB5007406 80MB 2021-11 Cumulative Update Preview for .NET Framework 3.5, 4.7.2 and 4.8 for…
CERES ------- KB2267602 2GB Security Intelligence Update for Microsoft Defender Antivirus - KB2267602 (…
PS C:\sys>
You can then execute the command
Get-WUInstall -AcceptAll -AutoReboot
and the operating
system will begin to install the updates and automatically reboot. These
are just a few ways you can script the installation of Windows
Updates.
Along with the knowledge of scripting comes great power and even greater responsibility. There are several key points that should be considered before scripting a solution and during the development of the script. In the following section we will cover some of these key considerations.
Of course, there may be considerations outside of these CompTIA objectives. The one consideration that has resonated throughout the previous section is the decision between investing time to develop a script or just completing the task. This is something that you will need to take into account before you even begin scripting.
A common pitfall with scripting is inadvertently introducing a security issue. Security issues come in all different forms when scripting. The most common security issue is the embedding of security credentials in scripts. Regardless of how secure you think the script will be, it's a bad habit and should be avoided at all costs.
That being said, there are instances where
embedding a password cannot be avoided. In these situations, you should
use mechanisms that are supported in the scripting language to act as a
digital locker for your password or methods that encrypt the password.
One such mechanism is the ConvertTo‐SecureString
cmdlet. It
means more lines of code, more time, and sometimes more aggravation in
getting it to work, but the benefit is a secure system.
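The following sketch (the server and account names are made up) shows two ways to keep a password out of the script file itself: prompting for a credential at run time, or converting a prompted password to a SecureString for use in a credential object.
# Prompt interactively so the password never appears in the script file
$cred = Get-Credential -Message "Account used by this script"

# Or build a credential object from a securely prompted password
$secure = Read-Host -Prompt "Password for CORP\svc_backup" -AsSecureString
$cred = New-Object System.Management.Automation.PSCredential ("CORP\svc_backup", $secure)

# The credential object can then be passed to cmdlets that support it
Invoke-Command -ComputerName server1 -Credential $cred -ScriptBlock { Get-Service }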
Another potential problem is the inadvertent introduction of malware with scripting. This usually happens when you need additional functionality, such as email capability or changing a system setting. Windows batch scripting has its limitations, and it is easy to use a third‐party tool to get the last bit of functionality out of the script. However, that third‐party tool may have malicious intent, and you could introduce that malicious code to the system in which the script is executed. PowerShell has its own potential malware risks; when you add an untrusted module, the same could happen.
The best way to avoid these pitfalls is to not shortcut the solution. Do your homework and avoid embedding passwords or using untrusted modules or third‐party utilities in your scripts. It means more work, but this should be factored into your cost‐benefit analysis when taking on developing a script.
Scripts are generally created to change various user and system settings. However, sometimes you can inadvertently change a system setting that is not meant to be changed. If this happens, it can adversely affect the system it is executed on. Therefore, it is always best to test the script on a sacrificial computer. A virtual machine is the best option, since it can be reverted to a prior snapshot if something unexpected happens.
An example of a simple mistake is using the setx
command
incorrectly. The setx
command modifies the system
environment variables and is used by Windows batch scripting. By using
the statement setx path "$path;c:\sys" /m
, you could cause
irreversible problems. This statement will set the path
variable to $path;c:\sys
, because the incorrect syntax was
used to address the variable of path
. The correct statement
is actually setx path "%path%;c:\sys" /m
. The statement now
sets the path
variable to the current path variable
contents (%path%
) and the addition of
c:\sys
.
The mistake can be as simple as mixing up syntax from one scripting language with syntax from another scripting language. However, some commands are not as forgiving, such as the previous example. When using PowerShell the mistake can be very subtle, since many commands cover both user and system settings. Always test your script before you execute it on a live system. Otherwise, you might be reinstalling an operating system in addition to completing your scripting task.
Scripting allows you to automate processes, which is the main reason we are developing the script in the first place. However, automation can sometimes create problems that we can't foresee during the development of the script. As an example, you can quickly gobble up all the usable RAM resources with the following Windows batch script. The script will launch instances of Notepad until the operating system runs out of memory.
:loop
start notepad.exe
goto :loop
Although this script is obvious in its intent, it is an extreme example of automation that if left unchecked will crash the system. Windows batch scripting is not the only scripting language where things can go awry; you can do the same with PowerShell. This example will create an HTML‐formatted file of the directory structure:
Get-ChildItem c:\ -Recurse | ConvertTo-Html | Out-File -FilePath .\output.html
The problem with this example is the sheer size of the resulting file. This statement will recursively list all the files from the C: drive down. When the file is launched in the web browser, the browser will quickly run out of memory trying to display the large file. More elaborate scripts may automatically open the web browser and immediately crash it.
To prevent similar problems from happening in your environment, you should test and monitor your scripts. By testing your script solution for errors or conditions that can run the system out of resources unintentionally, you identify and correct problems that would otherwise cripple the system. Monitoring should be performed after the script is in place, such as a script that is started via the Task Scheduler. You want to identify any spikes in resource usage after the solution is in place.
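One simple spot check you can run while a script executes is shown below; it lists the five processes consuming the most memory so that a runaway script like the examples above stands out quickly.
# Show the five largest processes by working set, converted to megabytes
Get-Process | Sort-Object WorkingSet64 -Descending |
    Select-Object -First 5 Name, Id,
        @{Name='MemoryMB';Expression={[math]::Round($_.WorkingSet64/1MB)}}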
Remote access technologies have been around for as long as networks have been in existence. As a technician, you can't always be in all the places you are needed. Remote access technologies allow you to do just that. You can create a remote connection to a distant computer on the network and change its configuration. In this section, we will discuss several different types of remote access technologies that are covered on the CompTIA 220‐1102 exam, as well as their security implications for a network.
Remote Desktop Protocol (RDP) is used exclusively with
Microsoft operating systems. An RDP client—the Remote Desktop
Connection client, also known as the mstsc.exe
utility—is built into the Microsoft operating system, as shown in Figure
20.6. The Remote Desktop Connection client can provide remote access
as though you were sitting in front of the keyboard, monitor, and mouse.
The Remote Desktop Connection client and RDP can transport several other
resources, such as remote audio, printers, the clipboard, local disks,
and video capture devices, as well as any other Plug and Play (PnP)
devices.
RDP communicates over TCP port 3389 to deliver the remote screen and connect the local mouse and keyboard for the RDP session. RDP uses Transport Layer Security (TLS) encryption by default, and it provides 128‐bit encryption. Microsoft allows one remote user connection or a local connection on desktop operating systems via RDP, but not both at the same time. On server operating systems, Microsoft allows two administrative connections, which can be a combination of local or remote access but cannot exceed two connections.
FIGURE 20.6 The Remote Desktop Connection client
Microsoft also uses RDP to deliver user desktops and applications via terminal services. When RDP is used in this fashion, a centralized gateway brokers the connections to each RDP client desktop session. Terminal services require terminal services licensing for either each user connecting or each desktop served. RDP can also be used to deliver applications to end users using Microsoft RemoteApp on terminal services. When RemoteApp is used, the server still requires a terminal services license. However, just the application is delivered to the user rather than the entire desktop.
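The Remote Desktop Connection client can also be started from a command line with the connection details supplied as switches, which is handy in scripts and shortcuts; the host names below are hypothetical.
mstsc.exe /v:server01.corp.local         # connect to a specific computer
mstsc.exe /v:server01.corp.local /admin  # connect to the administrative session on a server
mstsc.exe /v:server01 /w:1280 /h:720     # request a specific remote desktop resolution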
A virtual private network (VPN) extends your company's internal network across the Internet or other unsecured public networks. This remote access technology allows clients and branch networks to be connected securely and privately with the company's network. There are several different ways a VPN can be used in your network architecture, and we will cover them in the following sections. A VPN achieves this private connection across a public network by creating a secure tunnel from end to end through the process of encryption and encapsulation. The encryption protocols used vary, and we will cover them as well. Since a tunnel is created from end to end, your local host becomes part of the company's internal network along with an IP address that matches the company's internal network. We don't have to be bound to only TCP/IP across a VPN, since this technology can encapsulate any protocol and carry it through the tunnel.
Over the past 10 to 15 years, high‐bandwidth Internet connections have become cheaper than dedicated leased lines, so companies have opted to install Internet connections at branch offices. These lines can serve a dual purpose: connecting users to the Internet and connecting branch offices to the main office. The Internet is a public, unsecured network, but site‐to‐site VPN connections address that. Companies with multiple locations have reaped the benefits of creating VPN tunnels from site to site over the Internet by ditching their leased lines, installing VPN concentrators at each location, and creating VPN tunnels between them. Site‐to‐site VPN is also much more scalable than leased lines, because locations need only an Internet connection and a VPN concentrator to be tied together. Figure 20.7 details two locations tied together with a VPN tunnel. All of the work happens in the VPN concentrator. Because VPN concentrators also have a routing function, when a tunnel is established, a route entry for the remote network is created in the VPN concentrator. When traffic is destined for the branch office network of 10.2.0.0/16, the concentrator encrypts and encapsulates the information as data and sends it to the other side of the tunnel over the Internet. This is similar to a host‐to‐site VPN, the difference being that the routing is performed in the VPN concentrator. When the packet is received on the other side of the tunnel, the VPN concentrator decapsulates the data, decrypts the packet, and sends it to its destination inside the branch network. It is common to find that the appliance performing VPN duties is also the firewall and router; firewalls today are sold with VPN software built in and licensed accordingly.
FIGURE 20.7 A typical site‐to‐site VPN
Client‐to‐site VPN connectivity is a remote access strategy for mobile access. It can be used for telecommuters, salespeople, partners, and administrative access to the internal network resources. The key concept is that VPN access is granted on an individual or a group basis for the mobile users. Using the example in Figure 20.8, you can allow salespeople to connect to the corporate network so they can update sales figures or process orders. This can all be done securely over the Internet while the users are mobile and have access to a network connection.
FIGURE 20.8 A typical host‐to‐site VPN
When a client computer establishes a VPN connection, it becomes part of the internal corporate network. This happens through the assignment of an IP address from the internal corporate network. In Figure 20.9, you can see a mobile device such as a laptop with a VPN client installed in the operating system. When the connection is established with the VPN concentrator over the Internet, a pseudo network adapter is created by the VPN client. In this example, the pseudo network adapter is assigned an internal IP address of 10.2.2.8/16 by the VPN concentrator. The laptop also has its own IP address of 192.168.1.3/24, which it uses to access the Internet. A routing table entry is created in the operating system for the 10.2.0.0/16 network, pointing through the pseudo network adapter. When traffic is generated for the corporate network, it is sent to the pseudo adapter, where it is encrypted, handed to the physical NIC, and carried across the Internet to the VPN concentrator as data. When it arrives at the VPN concentrator, the IP header is stripped from the packet, the data is decrypted, and the packet is sent to its destination on the internal corporate network.
FIGURE 20.9 Client‐to‐site VPN connection
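As a quick sanity check after a client-to-site VPN connects, you can look for the pseudo adapter's address and route from PowerShell. The 10.2.x.x addressing below simply mirrors the example in Figure 20.9 and is not required by any particular VPN product.

# List any addresses assigned from the corporate 10.2.0.0/16 range (the pseudo adapter)
Get-NetIPAddress | Where-Object IPAddress -like '10.2.*'
# Confirm that the route for the corporate network points through the VPN adapter
Get-NetRoute -DestinationPrefix '10.2.0.0/16'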
There are many different VPN solutions on the market, and each one traditionally requires the installation of a VPN client. However, a growing number of products do not require the installation of a client; these are called clientless VPN solutions. With a clientless VPN, the web browser on the mobile device acts as the VPN client requiring connectivity back to the corporate network, and the VPN appliance acts as a reverse proxy to the various resources internal to the organization.
Virtual Network Computing (VNC) is a remote control tool for sharing desktops. The VNC server normally listens on TCP port 5900. VNC is similar to Microsoft RDP, except that VNC is an open source protocol and typically allows only one console session per operating system. It supports encryption via plug‐ins, but it is not encrypted by default.
VNC operates in a client‐server model. The server allows for the remote control of the host on which it is installed. It is normally configured with a simple shared password, but it can also be configured with Windows groups. Several different clients can be used, such as RealVNC, TightVNC, and many others, but they all perform similarly.
Telnet is an older remote access protocol for Linux, UNIX, and network device operating systems. Telnet provides an unencrypted remote text console session, communicating over TCP port 23. It is not considered secure and should not be used, because a malicious user can eavesdrop on the session. Many network devices still use Telnet for configuration purposes; however, SSH, if available, should be configured and used in lieu of Telnet.
Because Telnet is insecure and deprecated, many operating systems have removed the Telnet client and the server service. Since Windows 7, the Telnet client that comes with the operating system must be installed as a Windows feature before it can be used, and Windows Server 2016 removed the Telnet server entirely. Telnet has largely been replaced by SSH.
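If you do need the Telnet client on Windows 10/11 for configuring a legacy device, it can be added as an optional feature from an elevated PowerShell prompt; again, prefer SSH whenever the device supports it.

# Install the optional Telnet client feature (requires an elevated prompt)
Enable-WindowsOptionalFeature -Online -FeatureName TelnetClient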
Secure Shell (SSH) is commonly used for remote access via a text console for Linux and UNIX operating systems. The SSH protocol encrypts all communications between the SSH client and the SSH server using TCP port 22. The SSH server is also known as the SSH daemon. SSH uses public‐private key pair cryptography to provide authentication between the SSH client and server. SSH can also use a key pair to authenticate users connecting to the SSH server for the session, or a simple username and password can be provided.
It is important to understand that both the user and their host computer are authenticated when the user attempts to connect to the server. During the initial connection between the user's host computer and the SSH server, the encryption protocol is negotiated for the user's login, and the cryptography keys are verified. PuTTY, shown in Figure 20.10, is a common SSH client that is free to download and use. PuTTY provides various methods of connecting to a remote device, including Telnet and SSH.
FIGURE 20.10 PuTTY SSH Client
Beyond logging into a Linux or UNIX server for remote access, SSH can provide remote access for applications. Through the use of SSH port forwarding, the application can be directed across the SSH connection to the far end. This allows applications to tunnel through the SSH session. It also encrypts application traffic from the client to the server, because it is carried over the SSH encrypted tunnel. SSH can behave similarly to a VPN connection, but it is more complex to set up.
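Assuming the OpenSSH client that ships with current Windows 10/11 builds is present, the following commands sketch key-based authentication and local port forwarding. The admin@server1 account and the intranet host name are placeholders for this example, not values from any particular environment.

# Generate a key pair for the current user (stored under $env:USERPROFILE\.ssh)
ssh-keygen -t ed25519
# Append the public key to the server's authorized_keys file to enable key-based logins
Get-Content "$env:USERPROFILE\.ssh\id_ed25519.pub" | ssh admin@server1 "cat >> ~/.ssh/authorized_keys"
# Forward local port 8080 through the encrypted session to an internal web server
ssh -L 8080:intranet.example.com:80 admin@server1

While the forwarding session is open, browsing to http://localhost:8080 on the client reaches the internal web server through the encrypted tunnel.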
In the early days of your organization, it may have been simple to monitor and manage all the various systems from one location. However, as your organization's footprint grows across different sites and more employees work from home (WFH), it becomes more difficult to monitor and manage those systems. Systems need patching, must be monitored for disk space, and need hardware and applications installed, and these are just a few of the tasks.
This is where a remote monitoring and management (RMM) solution can help IT across your enterprise, or even multiple enterprises, and give you a holistic view of the environment. There are several different RMM solutions on the market today. One of the most popular approaches is to use a managed service provider (MSP) that manages your enterprise for a contracted price. These service providers ultimately use RMM software to monitor and maintain the enterprise. The MSP will require your organization to install an agent that is configured to report back to the MSP's RMM software.
You can also purchase a cloud‐based or on‐premises solution and maintain your organization with your own IT department. Every RMM vendor has its own variation of features, which makes up the product's secret sauce. These solutions also require the installation of an agent that reports back to the RMM platform. Regardless of which product you choose, there are two main features of RMM: remote monitoring and management.
Remote Monitoring The remote monitoring feature of an RMM system can watch a number of different components, such as security, hardware, applications, and even activity on the operating system. The most common monitoring target is the security of the various systems across your enterprise, such as patch levels, antimalware status, and exploits. Another monitored component is the hardware and applications installed across the enterprise. Monitoring the hardware can identify your assets, as well as identify when upgrades are needed. Application monitoring can identify problems with a specific application or your vulnerability when the application needs to be patched. These are just a few monitored components; the list grows with every release of new RMM software by vendors.
Reporting is a major component of the remote monitoring capabilities of RMM systems. Reporting can be active or passive in most systems. In an active reporting system, the RMM software compiles a report periodically and alerts you when a major change is discovered. As an example, if over 30 percent of your computers are vulnerable to a new exploit, the system can be configured to alert you. You may also set up a similar threshold alert for disk space. Passive reports can be run on demand to give you an overall picture of your network, typically in the form of a drill‐down report. A drill‐down report lets you view overall health and then drill down into specific areas of interest, all the way to a specific detail.
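The same kind of threshold check an RMM active report performs can be illustrated with a short PowerShell sketch. The 20 percent free-space threshold here is an arbitrary example, and a real RMM agent would report to a central console rather than just warn locally.

# Flag any fixed disk whose free space has dropped below 20 percent
Get-CimInstance Win32_LogicalDisk -Filter "DriveType=3" |
    Where-Object { $_.Size -gt 0 -and ($_.FreeSpace / $_.Size) -lt 0.20 } |
    ForEach-Object { Write-Warning "$($_.DeviceID) is below 20% free space" }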
Since the release of Windows XP, Microsoft has included various tools for providing remote assistance to the Windows operating system. In addition to Microsoft's proprietary remote assistance tools, many vendors have entered the market, so a number of third‐party remote assistance tools are available, ranging from free to commercial. Let's explore some of the built‐in capabilities of the Windows product and some of the features of third‐party products.
Microsoft Remote Assistance (MSRA), or msra.exe, was released with Windows XP. The tool itself is dated, but it is still available in Windows 11, as shown in Figure 20.11. The interface has not changed much since its original release, nor has the functionality. The MSRA tool allows a trusted helper to assist the user when the user creates a solicited request by choosing Invite Someone You Trust to Help You. This option will generate an Invitation.msrcIncident file that you can save as a file or email to the trusted user if you have email set up on the operating system, as shown in Figure 20.12. The third option is Easy Connect, which uses IPv6 and peer‐to‐peer networking to transfer the request.
FIGURE 20.11 MSRA tool
FIGURE 20.12 Inviting the helper
Before the user can send a request, the operating system must allow Remote Desktop connections. You can access this setting by clicking Start ➢ System ➢ About ➢ Advanced System Settings, then choosing the Remote tab, shown in Figure 20.13. You then select Allow Remote Connections To This Computer in the Remote Desktop area and click OK. By default, Allow Remote Assistance Connections To This Computer is already selected.
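If you prefer to script the same setting, the registry value and firewall rule group below are the ones the Remote tab toggles. Note that the 'Remote Desktop' display group name assumes an English-language installation, and both commands require an elevated PowerShell prompt.

# Allow incoming Remote Desktop connections (0 = allow, 1 = deny)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' -Name fDenyTSConnections -Value 0
# Enable the built-in firewall rules for Remote Desktop
Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'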
When the trusted helper gets the Invitation.msrcIncident file, the file will launch the MSRA tool and attempt to connect to the user. The user will then supply the session password to the trusted helper. Once the user and helper are connected and the password is entered on the trusted helper's MSRA tool, the user will be prompted to allow the helper. The result is a remote connection to the user, as shown in Figure 20.14. The default view of the MSRA tool is viewing mode. The trusted helper can request control of the operating system, and the user must allow the helper to control the operating system by answering the prompt. The MSRA tool has a chat feature that allows the trusted helper to communicate with the user.
However, you must keep several items in mind when using the MSRA tool. The first is that you will not find the tool in any menu; the only way to launch it is to enter msra.exe in the Run dialog box. Another consideration is that MSRA works well only inside the organization, because routers and firewalls break the functionality of MSRA over the Internet. Easy Connect was added to allow IPv6 to be used across an IPv4 network to work around this problem, but Easy Connect is not set up by default. Outside of these considerations, MSRA is a useful tool for technicians.
FIGURE 20.13 Allowing Remote Desktop Connections
FIGURE 20.14 MSRA tool connected to the user
Microsoft recognized the deficiencies of the original MSRA tool and included another remote access tool for soliciting help, called Quick Assist, in Windows 10 and Windows 11, shown in Figure 20.15. Quick Assist operates similarly to third‐party services such as Splashtop, GoToMeeting, and Join.me, which allow remote desktop sharing and assistance for a user through screen sharing. Unfortunately, Quick Assist does not support file transfers. Unlike the original Remote Assistance tool introduced in Windows XP, Quick Assist works behind routers and firewalls, and it is slowly replacing the Remote Assistance application in current Windows versions. Quick Assist even has its own shortcut for easy access by the user.
FIGURE 20.15 Windows Quick Assist tool
You can launch Quick Assist on Windows 10/11 by clicking the Start menu and typing Quick Assist. Once the Quick Assist tool is started on your computer (the assistant's), choose Give Assistance. You will then be prompted to sign in with a Microsoft account. Once you are signed in, you will be presented with a six‐digit code, which is valid for 10 minutes. The end user (the person being helped) then launches Quick Assist, selects Get Assistance, and enters your six‐digit code into the dialog box when prompted. Once the code is entered on the end user's side, both computers connect, and you will be asked what type of control you want over the end user's computer: View Only or Full Control. Of course, the end user has to allow the control, but you can provide remote assistance without installing any third‐party software.
Quick Assist offers chat functionality to the assistant and the end user in a chat window. You can launch this window by clicking Toggle Instruction Channel on the Quick Assist toolbar. You can also launch Task Manager automatically in later versions of the tool. By clicking the Task Manager item on the Quick Assist toolbar, you can remotely launch Task Manager on the end user's operating system. An annotation tool allows you to draw on the end user's screen to guide them. Select this tool by clicking the Annotate option on the Quick Assist toolbar. However, just like the MSRA tool, Quick Assist does not have a file transfer utility. Any file transfers must be done with a file sharing service, where the user downloads the file from the Internet link you provide.
Third‐party tools such as Splashtop, GoToMeeting, and Join.me have their own unique features. Each tool fits into a category of screen sharing, videoconferencing, file transfer, or desktop management. When choosing a third‐party tool, you should evaluate the main requirements and the category the software excels at. Let's examine the various categories to help you better identify the best software for your needs.
Each of the remote access technologies discussed in this chapter has security considerations. Before implementing a remote access technology, you should determine what type of data is going to be exchanged and whether the level of encryption is sufficient. Telnet, which is not encrypted, might be fine if simple data is being transmitted, such as temperature or humidity readings. However, if passwords, configuration data, or any other type of sensitive information will be transmitted, then a more secure protocol, such as Secure Shell, should be used.
Beyond the data in transit and the method that provides the transit, there are other security considerations. One of the biggest concerns is any remote access agent that listens for connections and is exposed to the Internet. A threat agent can exploit such a software package to compromise the host it runs on. To combat this problem, keep the software up to date, and if multifactor authentication is available, use it.
Videoconferencing packages should be secured so that a password is required to join a meeting. Setting a password will thwart conference bombing, also known as Zoombombing. This is the act of a threat agent guessing the meeting ID and joining an otherwise private conversation. This type of security concern isn't really your typical threat, but it is a liability to your organization if not avoided.
This chapter focused on the basics of scripting and introduced programming concepts such as script types, commenting, branch logic, loops, and use cases for scripting. Scripting is a relatively new addition to the CompTIA A+ objectives, and it is a worthy skill for an A+ technician.
We also introduced you to the various remote access technologies, highlighting their advantages and weaknesses. In addition, we covered remote assistance tools, which allow a remote user to share their screen with you for support purposes.
The answers to the chapter review questions can be found in Appendix A.
(The multiple‐choice review questions for this chapter survive only as fragments of their answer choices. They covered assigning a variable in PowerShell and batch, for example $xvar = 2 versus set /a xvar=2; loop constructs; the file extensions used by scripting languages, including .vbs, .js, .bat, .sh, and .py; the chmod/execute permission required to run a shell script on Linux; and comment syntax such as //, ', REM, and #.)
You will encounter performance‐based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
You have been assigned to write a PowerShell script that will find other scripts in a user profile directory and all its subdirectories. Which PowerShell variable should you use, since %UserProfile% is an environment variable and will not run in PowerShell?
THE FOLLOWING COMPTIA A+ 220‐1102 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
In this chapter, we start by talking about safety, which includes your safety and the safety of your coworkers, as well as environmental concerns. Observing proper safety procedures can help prevent injury to you or to others.
Our discussion about the environment is two‐sided. The environment affects computers (via things like dust, sunlight, and water), but computers can also potentially harm the environment. We'll consider both sides as we move through this chapter.
Next, we will cover some legal aspects of operational procedures. These include licensing of software, protection of personally identifiable information, and incident response.
The proliferation of computers in today's society has created numerous jobs for technicians. Presumably that's why you're reading this book: you want to get your CompTIA A+ certification. Many others, who don't fix computers professionally, like to tinker with them as a hobby. Years ago, only the most expert users dared to crack the case on a computer, and repairing the system often meant using a soldering iron. Today, thanks to cheap replacement parts, computer repair is not quite as involved. Regardless of your skill or intent, if you're going to be inside a computer, you always need to be aware of safety issues. There's no sense in getting yourself hurt or killed.
As a provider of a hands‐on service (repairing, maintaining, or upgrading someone's computer), you need to be aware of some general safety tips, because if you are not careful, you could harm yourself or the equipment. Clients expect you to solve their problems, not make them worse by injuring yourself or those around you. In the following sections, we'll talk about identifying safety hazards and creating a safe working environment.
Anything can be a potential safety hazard, right? Okay, maybe that statement is a bit too paranoid, but there are many things, both human‐made and environmental, that can cause safety problems when you're working with and around computers.
Perhaps the most important aspect of computers you should be aware of is that not only do they use electricity, but they also store electrical charge after they're turned off. This makes the power supply and the monitor pretty much off‐limits to anyone but a repairperson trained specifically for those devices. In addition, the computer's processor and various parts of the printer run at extremely high temperatures, and you can get burned if you try to handle them immediately after they've been in operation.
Those are just two general safety measures that should concern you. There are plenty more. When discussing safety issues with regard to PCs, let's break them down into four general areas:
As mentioned earlier, computers use electricity. And as you're probably aware, electricity can hurt or kill you. The first rule when working inside a computer is always to make sure that it's powered off. If you have to open the computer to inspect or replace parts (as you will with most repairs), be sure to turn off the machine before you begin. Leaving it plugged in is usually fine, and many times it is actually preferred because it grounds the equipment and can help prevent electrostatic discharge.
There's one exception to the power‐off rule: you don't have to power off the computer when working with hot‐swappable parts, which are designed to be unplugged and plugged back in when the computer is on. Most of these components have an externally accessible interface (such as USB devices or hot‐swappable hard drives), so you don't need to crack the computer case.
Do not take the issue of safety and electricity lightly. Removing the power supply from its external casing can be dangerous. The current flowing through the power supply normally follows a complete circuit; when your body breaks the circuit, your body becomes part of that circuit. Getting inside the power supply is the most dangerous thing you can do as an untrained technician.
The two biggest dangers with power supplies are burning or electrocuting yourself. These risks usually go hand in hand. If you touch a bare wire that is carrying current, you could get electrocuted. A large‐enough current passing through you can cause severe burns. It can also cause your heart to stop, your muscles to seize, and your brain to stop functioning. In short, it can kill you. Electricity always finds the best path to ground. And because the human body is basically a bag of saltwater (an excellent conductor of electricity), electricity will use us as a conductor if we are grounded.
Although it is possible to open a power supply to work on it, doing so is not recommended. Power supplies contain several capacitors that can hold lethal charges long after they have been unplugged! It is extremely dangerous to open the case of a power supply. Besides, power supplies are relatively inexpensive and are considered field replaceable units (FRUs). It would probably cost less to replace one than to try to fix it—and much safer.
In the late 1990s, a few mass computer manufacturers experimented with using open power supplies in their computers to save money. We don't know if any deaths occurred because of such incompetence, but it was definitely a very bad idea.
If you ever have to work on a power supply, for safety's sake you should discharge all capacitors within it. To do this, connect a resistor across the leads of the capacitor with a rating of 3 watts or more and a resistance of 100 ohms (Ω) per volt. For example, to discharge a 225‐volt capacitor, you would use a 22.5kΩ resistor (225V × 100Ω = 22,500Ω, or 22.5kΩ). You can also purchase a tool, known as a grounding pen or discharge pen, that will discharge any size capacitor. The tools look similar to a pen and have a wire that is connected to ground.
Other than the power supply, the most dangerous component to try to repair is a computer monitor—specifically, older‐style cathode‐ray tube (CRT) monitors. In fact, we recommend that you not try to repair monitors of any kind.
To avoid the extremely hazardous environment contained inside the monitor (it can retain a high‐voltage charge for hours after it's been turned off), take it to a certified monitor technician or television repair shop. The repair shop or certified technician will know the proper procedures for discharging the monitor, which involve attaching a resistor to the flyback transformer's charging capacitor to release the high‐voltage electrical charge that builds up during use. They will also be able to determine whether the monitor can be repaired or whether it needs to be replaced. Remember, the monitor works in its own extremely protected environment (the monitor case) and may not respond well to your desire to try to open it.
Even though we recommend not repairing monitors, the A+ exam may test your knowledge of the safety practices to use if you ever need to do so. If you have to open a monitor, you must first discharge the high‐voltage charge on it by using a high‐voltage probe. This probe has a very large needle, a gauge that indicates volts, and a wire with an alligator clip. Attach the alligator clip to a ground (usually the round pin on the power cord). Slip the probe needle underneath the high‐voltage cup on the monitor. You will see the gauge spike to around 15,000 volts and slowly reduce to 0 (zero). When it reaches 0, you may remove the high‐voltage probe and service the high‐voltage components of the monitor.
Working with liquid crystal display (LCD) monitors—or any device with a fluorescent or LCD backlight—presents a unique safety challenge. These types of devices require an inverter, which provides the high‐voltage, high‐frequency energy needed to power the backlight.
The inverter is a small circuit board installed behind the LCD panel that takes DC power and converts (inverts) it for the backlight. If you've ever seen a laptop or handheld device with a flickering screen or perpetual dimness, it was likely an inverter problem. Inverters store energy even when their power source is cut off, so they have the potential to discharge that energy if you mess with them. Be careful!
One component that people frequently overlook is the case. Cases are generally made of metal, and some computer cases have very sharp edges inside, so be careful when handling them. You can cut yourself by jamming your fingers between the case and the frame when you try to force the case back on. Also of particular interest are drive bays. Countless technicians have scraped or cut their hands on drive bays when trying in vain to plug a drive cable into the motherboard. You can cover particularly sharp edges with duct tape. Just make sure that you're covering only metal and nothing with electrical components on it.
If you've ever attempted to repair a printer, you might have thought that a little monster was inside, hiding all the screws from you. Besides missing screws, here are some things to watch out for when repairing printers:
When working with printers, we follow some pretty simple guidelines. If there's a messed‐up setting, paper jam, or ink or toner problem, we will fix it. If it's something other than that, we call a certified printer repairperson. The inner workings of printers can get pretty complex, and it's best to call someone trained to make those types of repairs.
Okay, we know that you're thinking, “What danger could a keyboard or mouse pose?” We admit that not much danger is associated with these components, but there are a couple of safety concerns that you should always keep in mind:
So far, we've talked about how electricity can hurt people, but it can also pose safety issues for computer components. One of the biggest concerns for components is electrostatic discharge (ESD). For the most part, ESD won't do serious damage to a person other than provide a little shock. But little amounts of ESD can cause serious damage to computer components, and that damage can manifest itself by causing computers to hang, reboot, or fail to boot at all. ESD happens when two objects of dissimilar charge come into contact with each other. The two objects exchange electrons in order to standardize the electrostatic charge between them. This charge can, and often does, damage electronic components.
When you shuffle your feet across the floor and shock your best friend on the ear, you are discharging static electricity into their ear. The lowest static voltage transfer that you can feel is around 3,000 volts; it doesn't electrocute you because there is extremely little current. A static transfer that you can see is at least 10,000 volts! Just by sitting in a chair, you can generate around 100 volts of static electricity. Walking around wearing synthetic materials can generate around 1,000 volts. You can easily generate around 20,000 volts simply by dragging your smooth‐soled shoes across a carpeted floor in the winter. (Actually, it doesn't have to be winter. This voltage can occur in any room with very low humidity—like a heated room in wintertime.)
It makes sense that these thousands of volts can damage computer components. However, a component can be damaged with less than 300 volts! This means that if a small charge is built up in your body, you could damage a component without realizing it.
The good news is that there are measures that you can implement to help contain the effects of ESD. The first and easiest item to implement is the antistatic wrist strap, also referred to as an ESD strap. We will look at the antistatic wrist strap, as well as other ESD prevention tools, in the following sections.
To use an ESD antistatic strap, you attach one end to an earth ground (typically, the ground pin on an extension cord) or the computer case and wrap the other end around your wrist. This strap grounds your body and keeps it at a zero charge. Figure 21.1 shows the proper way to attach an antistatic strap. There are several varieties of wrist straps available. The strap shown in Figure 21.1 uses an alligator clip and is attached to the computer case itself, whereas others use a banana clip that attaches to a grounding coupler.
FIGURE 21.1 ESD strap using an alligator clip
In order for an antistatic wrist strap to work properly, the computer must be plugged in but turned off. When the computer is plugged in, it is grounded through the power cord. When you attach yourself to it with the wrist strap, you are grounded through the power cord as well. If the computer is not plugged in, there is no ground, and any excess electricity on you will just discharge into the case, which is not good.
It is possible to damage a device by simply laying it on a benchtop. Therefore, you should have an ESD antistatic mat in addition to an ESD strap. An ESD mat drains excess charge away from any item coming in contact with it (see Figure 21.2). ESD mats are also sold as mouse/keyboard pads to prevent ESD charges from interfering with the operation of the computer. Many wrist straps can be connected to the mat, thus causing the technician and any equipment in contact with the mat to be at the same electrical potential and eliminating ESD. There are even ESD bootstraps and ESD floor mats, which are used to keep the technician's entire body at the same potential.
Antistatic bags, shown in Figure 21.3, are important tools to have at your disposal when servicing electronic components because they protect the sensitive electronic devices from stray static charges. By design, the static charges collect on the outside of these silver or pink bags, rather than on the electronic components.
FIGURE 21.2 Proper use of an ESD antistatic mat
FIGURE 21.3 An antistatic component bag
You can obtain the bags from several sources. The most direct way to acquire antistatic bags is to go to an electronics supply store and purchase them in bulk. Most supply stores have several sizes available. Perhaps the easiest way to obtain them, however, is simply to hold onto the ones that come your way. That is, when you purchase any new component, it usually comes in an antistatic bag. After you have installed the component, keep the bag. It may take you a while to gather a collection of bags if you take this approach, but eventually you will have a fairly large assortment.
We recommend that you include a grounding strap in your toolkit so that you're never without it. But we also realize that things happen and you might find yourself in a situation where you don't have your strap or an ESD mat. In such cases, you should self‐ground.
Self‐grounding is not as effective as using proper anti‐ESD gear, but it makes up for that with its simplicity. To self‐ground, make sure the computer is turned off but plugged in. Then touch an exposed (but not hot or sharp) metal part of the case. This will drain electrical charge from you. Better yet is if you can maintain constant contact with that metal part. That should keep you at the same bias as the case. Yes, it can be rather challenging to work inside a computer one‐handed, but it can be done.
Another preventive measure that you can take is to maintain the relative humidity at around 50 percent. Don't increase the humidity too far—to the point where moisture begins to condense on the equipment. It is best to check with the manufacturer of the equipment that you are protecting to find the optimal humidity. Also, use antistatic spray, which is available commercially, to reduce static buildup on clothing and carpets.
Vendors have methods of protecting components in transit from manufacture to installation. They press the pins of integrated circuits (ICs) into antistatic foam to keep all the pins at the same potential, as shown in Figure 21.4. In addition, most circuit boards are shipped in antistatic bags, as discussed earlier.
FIGURE 21.4 Antistatic foam
At the very least, you should be mindful of the dangers of ESD and take steps to reduce its effects. Beyond that, you should educate yourself about those effects so that you know when ESD is becoming a major problem.
When compared to the other dangers that we've discussed in this chapter, electromagnetic interference (EMI), also known as radio‐frequency interference (RFI) when it's in the same frequency range as radio waves, is by far the least dangerous. EMI really poses no threats to you in terms of bodily harm. What it can do is make your equipment or network malfunction.
EMI is an unwanted disturbance caused by electromagnetic radiation generated by another source. In other words, some of your electrical equipment may interfere with other equipment. Here are some common sources of interference:
Computers should always be operated in cool environments and away from direct sunlight and water sources. This is also true when you're working on computers. We know that heat is an enemy of electrical components. Dirt and dust act as great insulators, trapping heat inside components. When components run hotter than they should, they have a greater chance of breaking down faster.
It pretty much should go without saying, but we'll say it anyway: water and electricity don't mix. Keep liquids away from computers. If you need your morning coffee while fixing a PC, make sure that the coffee cup has a tight and secure lid.
Benjamin Franklin was quoted as saying, “An ounce of prevention is worth a pound of cure.” That sage advice applies to a lot in life and certainly to computer safety. Knowing how to work with and handle computer equipment properly is a good start. It's also important to institutionalize and spread the knowledge, and to make sure that your company has the proper policies and procedures in place to ensure everyone's safety.
We've already talked about some of the hazards posed by computer parts. Many times it's the more mundane tasks that get us, though, such as moving stuff around. One of the most common ways that IT employees get hurt is by moving equipment in an improper way. Changing the location of computers is a task often completed by IT personnel. You can avoid injury by moving things the right way.
To ensure your personal safety, here are some important techniques to consider before moving equipment:
The muscles in the lower back aren't nearly as strong as those in the legs or other parts of the body. Whenever lifting, you want to reduce the strain on those lower‐back muscles as much as possible. If you want, use a back belt or brace to help you maintain the proper position while lifting.
If you believe that the load is too much for you to carry, don't try to pick it up. Get assistance from a coworker. Another great idea is to use a cart. It will save you trips if you have multiple items to move, and it saves you the stress of carrying components.
If you do use a cart to move the equipment, make sure that you do not overload the cart. Know the cart's weight limitation and estimate the weight of the equipment you will be hauling. Most small, commercial service carts will hold around 100–200 pounds. If you're moving a battery backup unit that requires two people to lift, you may be pushing the limitations of the cart. Also make sure the load is not top heavy. Always place the heaviest items on the lower shelves of a cart.
When moving loads, always be aware of your surrounding environment. Before you move, scout out the path to see whether there are any trip hazards or other safety concerns, such as spills, stairs, uneven floors (or ripped carpet), tight turns, or narrow doorways.
A big part of creating a safe working environment is having the right tools available for the job. There's no sense implementing a sledgehammer solution to a ball‐peen hammer problem. Using the wrong tool might not help fix the problem, and it could very possibly hurt you or the computer in the process.
Most of the time, computers can be opened and devices removed with nothing more than a simple screwdriver. But if you do a lot of work on PCs, you'll definitely want to have additional tools on hand.
Computer toolkits are readily available on the Internet or at any electronics store. They come in versions from inexpensive (under $10) kits that have around 10 pieces to kits that cost several hundred dollars and have more tools than you will probably ever need. Figure 21.5 shows an example of a basic 13‐piece PC toolkit. All of these tools come in a handy zippered case, so it's hard to lose them.
Figure 21.5 shows the following tools, from left to right:
FIGURE 21.5 PC toolkit
A favorite of ours is the three‐claw retriever, because screws like to fall and hide in tiny places. While most of these tools are incredibly useful, an IC extractor probably won't be. In today's environment, it's rare to find an IC that you can extract, much less find a reason to extract one.
The following sections look at some of the tools of the PC troubleshooting trade.
Every PC technician worth their weight in pocket protectors needs to have a screwdriver—at least one. There are three major categories of screwdrivers: flat‐blade, Phillips, and Torx. In addition, there are devices that look like screwdrivers, except that they have a hex‐shaped indented head on them. They're called hex drivers, and they belong to the screwdriver family.
When picking a screwdriver, always keep in mind that you want to match the size of the screwdriver head to the size of the screw. Using a screwdriver that's too small will cause it to spin inside the head of the screw, stripping the head of the screw and making it useless. If the screwdriver is too large, on the other hand, you won't be able to get the head in far enough to generate any torque to loosen the screw. Of course, if the screwdriver is way too big, it won't even fit inside the screw head at all. Common sizes for Phillips‐head screws are 000, 00, 0, 1, 2, and 3. When you are dealing with Torx screws, the two most common sizes are T‐10 and T‐15.
We've already talked about these, but they are important, so we'll mention them again. An antistatic wrist strap is essential to any PC technician's arsenal. They don't typically come with smaller PC toolkits, but you should always have one or two handy.
PC techs also commonly carry the following tools:
Repairing a computer isn't often the cause of an electrical fire. However, you should know how to extinguish such a fire properly. Four major classes of fire extinguishers are available, one for each type of flammable substance: A, for wood and paper fires; B, for flammable liquids; C, for electrical fires; and D (metal powder or NaCl [salt]), for flammable metals, such as phosphorus and sodium.
The most popular type of fire extinguisher today is the multipurpose, or ABC‐rated, extinguisher. It contains a dry chemical powder (for example, sodium bicarbonate, monoammonium phosphate) that smothers the fire and cools it at the same time. For electrical fires (which may be related to a shorted‐out wire in a power supply), make sure the fire extinguisher will work for Class C fires. If you don't have an extinguisher that is specifically rated for electrical fires (Class C), you can use an ABC‐rated extinguisher.
Electrical fire safety is a very broad subject. The best prevention method for electrical fires is to follow building codes; if building codes are not followed, your organization could be fined. Every state and locality has a different building code that you should reference for new structures and alterations to existing structures. In addition to state and local building codes, you also need to reference national building codes related to fire prevention. One organization that publishes these codes is the National Fire Protection Association (NFPA), at www.nfpa.org/Codes-and-Standards. The U.S. Fire Administration (www.usfa.fema.gov/prevention) is a government organization that provides general training and prevention material.
We've already talked about some work environment issues. For example, don't put a computer next to the break‐room sink, and keep computers out of direct sunlight (even if the desk location is great). A few other things to watch out for are trip hazards, atmospheric conditions, and high‐voltage areas.
Cables are a common cause of tripping. If at all possible, run cables through drop ceilings or through conduits to keep them out of the way. If you need to lay a cable through a trafficked area, use a floor cable guard to keep the cables in place and safe from crushing. Floor guards come in a variety of lengths and sizes (for just a few cables or for a lot of cables). Figure 21.6 shows a cable guard.
FIGURE 21.6 Floor cable guard
Another useful tool to keep cables under control is a cable tie (see Figure 21.7). It's simply a plastic tie that holds two or more cables together. Cable ties come in different sizes and colors, so you're bound to find one that suits your needs.
FIGURE 21.7 Cable ties
Exercise 21.1 is a simple exercise that you can modify and use as needed. Its purpose is to illustrate common office hazards that you may not have realized were there.
Atmospheric conditions you need to be aware of include areas with high static electricity or excessive humidity. Being aware of these conditions is especially important for preventing electrostatic discharge, as we've already discussed.
Finally, be aware of high‐voltage areas. Computers do need electricity to run, but only in measured amounts. Running or fixing computers in high‐voltage areas can cause problems for the electrical components and problems for you if something should go wrong.
The Occupational Safety and Health Act states that every working American has the right to a safe and healthy work environment. To enforce the act, the Occupational Safety and Health Administration (OSHA) was formed. OSHA covers all private‐sector employees and U.S. Postal Service workers. Public‐sector employees are covered by state programs, and federal employees are covered under a presidential executive order. In a nutshell, OSHA requires employers to “provide a workplace that is free of recognized dangers and hazards.”
There are three overarching criteria to a safe work environment:
The following sections explore specific responsibilities and how to create a safe work environment plan.
Maintaining workplace safety is the responsibility of employers as well as employees. Here are some of the important responsibilities of employers:
It's also the responsibility of the employee to help maintain a safe work environment. Specifically, employees are charged with the following tasks:
As you can see, employers and employees need to work together to keep the workplace safe. It is illegal for an employee to be punished in any way for exercising their rights under the Occupational Safety and Health Act.
We recommend that your company create and follow a workplace safety plan. Having a safety plan can help avoid accidents that result in lost productivity, equipment damage, and employee injury or death.
A good safety plan should include the following elements:
It might seem like a laundry list of items to consider, but a good safety program needs to be holistic in nature to be effective.
Many companies also incorporate rules against drug or alcohol use in their safety and health plans. Specifically, employees are not allowed to come to work if under the influence of alcohol or illegal drugs. Employees who do come to work under the influence may be subject to disciplinary action up to and including termination of employment.
After your safety plan has been created, you need to ensure that all employees receive necessary training. Have each employee sign a form at the end of the training to signify that they attended, and keep the forms in a central location (such as with or near the official safety policy). In addition to the training record, you should make available and keep records of the following:
Safety rules and regulations will work only if they have the broad support of management from the top down. Everyone in the organization needs to buy into the plan; otherwise, it won't be a success. Make sure that everyone understands the importance of a safe work environment, and make sure that the culture of the company supports safety in the workplace.
Accidents happen. Hopefully, they don't happen too often, but we know that they do. Details on how to handle accidents are a key part of any safety plan so that when an accident does happen, you and your coworkers know what to do. Good plans should include steps for handling a situation as well as reporting an incident. We will cover incident response in more detail later in this chapter. Two major classifications of accidents are environmental and human.
When related to computers, environmental accidents typically come in one of two forms: electricity or water. Too much electricity is bad for computer components. If lightning is striking in your area, you run a major risk of frying computer parts. Even if you have a surge protector, you could still be at risk.
The best bet in a lightning storm is to power off your equipment and unplug it from outlets. Make the lightning have to come inside a window and hit your computer directly in order to fry it.
Water is obviously also bad for computer components. If there is water in the area and you believe that it will come in contact with your computers, it's best to get the machines powered off as quickly as possible. If components are not powered on but get wet, they may still work after thoroughly drying out. But if they're on when they get wet, they're likely cooked. Water + electronic components = bad. Water + electronic components + electricity = really bad.
Many server rooms have raised floors. Although this serves several purposes, one is that equipment stored on the raised floor is less susceptible to water damage if flooding occurs.
Human nature dictates that we are not infallible, so, of course, we're going to make mistakes and have accidents. The key is to minimize the damage caused when an accident happens.
If a chemical spill occurs, make sure that the area gets cordoned off as soon as possible. Then clean up the spill. The specific procedure on how to do that depends on the chemical, and that information can be found on material safety data sheets (MSDSs). Depending on the severity of the spill or the chemical released, you may also need to contact the local authorities. Again, the MSDS should have related information. We cover MSDSs in more detail later in the chapter.
Physical accidents are more worrisome. People can trip on wires and fall, cut or burn themselves repairing computers, and incur a variety of other injuries. Computer components can be replaced, but that's not always true of human parts (and it's certainly not true of lives). The first thing to keep in mind is always to be careful and use common sense. If you're trying to work inside a computer case and you see sharp metal edges inside the case, see whether the metal (or component on which you are working) can be moved to another location until you finish. Before you stick your hand into an area, make sure that nothing is hot or could cut you.
When an accident does happen (or almost happens), be sure to report it. Many companies pay for workers' compensation insurance. If you're injured on the job, you're required to report the incident, and you might also get temporary payments if you are unable to work because of the accident. Also, if the accident was anything but minor, seek medical attention. Just as victims in auto accidents might not feel pain for a day or two, victims in other physical accidents might be in the same position. If you never report the accident, insurance companies may find it less plausible that your suffering was work related.
It is estimated that more than 25 percent of all the lead (a poisonous substance) in landfills today comes from consumer electronics components. Because consumer electronics (televisions, DVRs, Blu‐ray players, stereos) contain hazardous substances, many states require that they be disposed of as hazardous waste. Computers are no exception. Monitors contain several carcinogens and phosphors as well as mercury and lead. The computer itself may contain several lubricants and chemicals as well as lead. Printers contain plastics and chemicals, such as those in toners and inks, which are also hazardous. All of these items should be disposed of properly.
Remember all those 386 and 486 computers that came out in the late 1980s and are now considered antiques? Maybe you don't, but there were millions of them. Where did they all go? Is there an Old Computers Home somewhere that is using these computer systems for good purposes, or are they lying in a junkyard somewhere? Or could it be that some folks just cannot let go and have a stash of old computer systems and computer parts in the dark depths of their basements? Regardless of where they are today, all of those old components have one thing in common: they are hazardous to the environment.
On the flip side, the environment is also hazardous to our computers. We've already talked about how water and computers don't mix well, and that's just the beginning. Temperature, humidity, and air quality can have dramatic effects on a computer's performance. And we know that computers require electricity; too much or too little can be a problem.
With all these potential issues, you might find yourself wondering, “Can't we all just get along?” In the following sections, we will talk about how to make our computers and the environment coexist as peacefully as possible.
Some of our computers sit in the same dark, dusty corner for their entire lives. Other computers are carried around, thrown into bags, and occasionally dropped. Either way, the physical environment in which our computers exist can have a major effect on how long they last. It's smart to inspect the physical environment periodically in order to ensure that there are no working hazards. Routinely cleaning components will also extend their useful life, and so will ensuring that the power supplying them is maintained.
As electronics, computers need a power source. Laptops can free you from your power cord leash for a while, but only temporarily. Power is something that we often take for granted until we lose it, and then we twiddle our thumbs and wonder what people did before the Internet. Most people realize that having too much power (a power surge) is a bad thing because it can fry electronic components. Having too little power, such as when a blackout occurs, can also wreak havoc on electrical circuits.
Obviously, if we lose power, the equipment stops working. We have all experienced blackouts, but there are many other electrical problems we can encounter that will affect our network equipment and interrupt operations:
FIGURE 21.8 A simple power strip
Power strips come in all shapes and sizes and are convenient for plugging multiple devices into one wall outlet. Most of them even have an on/off switch so that you can turn all the devices on or off at the same time. Figure 21.8 shows a simple power strip.
Don't make the mistake of thinking that power strips will protect you from electrical surges, though. If you get a strong power surge through one of these $10 devices, the strip and everything plugged into it can be fried. Some people like to call power strips “surge protectors” or “surge suppressors,” but power strips do nothing to protect against or suppress surges.
Devices that actually attempt to keep power surges at bay are called surge protectors. They often look similar to a power strip, so it's easy to mistake them for each other, but protectors are more expensive, usually starting in the $25 range. Surge protectors have a fuse inside them that is designed to blow if it receives too much current and not to transfer the current to the devices plugged into it. Surge protectors may also have plug‐ins for RJ‐11 (phone), RJ‐45 (Ethernet), and BNC (coaxial cable) connectors.
Figure 21.9 shows a surge protector, which doesn't look too different from a simple power strip. The key is to read the packaging and the labels on the product. Make sure that the device will protect your electronics from electrical surges. There is usually a printed specification of 200V to 500V. This indicates how much of a surge in voltage the surge protector can handle.
The best device for power protection is called an uninterruptible power supply (UPS). These devices can be as small as a brick, like the one shown in Figure 21.10, or as large as an entire server rack. Some just have a few indicator lights, while others have LCD displays that show status and menus and that come with their own management software.
FIGURE 21.9 A surge protector
Inside the UPS are one or more batteries and fuses. Much like a surge suppressor, a UPS is designed to protect everything that's plugged into it from power surges. UPSs are also designed to protect against power sags and even power outages. Energy is stored in the batteries, and if the power fails, the batteries can power the computer for a period of time so that the administrator can then safely power it down. Many UPSs and operating systems will also work together to power down a system automatically (and safely) or switch it to UPS power. These types of devices may be overkill for Uncle Bob's machine at home, but they're critically important fixtures in server rooms.
FIGURE 21.10 An uninterruptible power supply
UPSs can accommodate several different devices; the number depends on the size and power rating. The model shown in Figure 21.11 has four plugs for battery backup and surge protection, and another four outlets for surge protection only. Two outlets in each group of four are controlled by a master switch on the unit.
FIGURE 21.11 The back of a UPS
The UPS should be checked periodically as part of the preventive maintenance routine to make sure its battery is operational. Most UPSs have a test button you can press to simulate a power outage. You will find that batteries wear out over time, and you should replace the battery in the UPS every couple of years to keep the UPS dependable.
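If your UPS is monitored by software, part of this routine check can even be scripted. The following is a minimal sketch only, not a vendor procedure: it assumes a host running the open source Network UPS Tools (NUT) package with a UPS configured under the name myups, and it simply reads the battery charge and runtime values that the upsc utility reports.

    # check_ups.py - minimal sketch of a periodic UPS battery check.
    # Assumes Network UPS Tools (NUT) is installed and a UPS is configured
    # as "myups"; adjust the name for your environment.
    import subprocess

    def read_ups_status(ups_name="myups@localhost"):
        """Return the key/value pairs reported by the upsc utility."""
        output = subprocess.run(
            ["upsc", ups_name], capture_output=True, text=True, check=True
        ).stdout
        status = {}
        for line in output.splitlines():
            if ":" in line:
                key, value = line.split(":", 1)
                status[key.strip()] = value.strip()
        return status

    if __name__ == "__main__":
        status = read_ups_status()
        charge = float(status.get("battery.charge", 0))     # percent
        runtime = float(status.get("battery.runtime", 0))   # seconds
        print(f"Battery charge: {charge:.0f}%  Estimated runtime: {runtime/60:.1f} min")
        if charge < 80 or runtime < 300:
            print("WARNING: battery may be due for replacement or service.")

A script like this can be scheduled to run weekly, but it supplements, rather than replaces, pressing the UPS's test button and physically inspecting the unit.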
Sometimes we can't help how clean—or unclean—our environments are. A computer in an auto body shop is going to face dangers that one in a receptionist's office won't. Still, there are things that you can do to help keep your systems clean and working well. We're going to break these concepts down into two parts. First, we'll look at common issues you should be aware of, and then we'll discuss proper cleaning methods.
In a nutshell, water and other liquids, dirt, dust, unreliable power sources, and heat and humidity aren't good for electronic components. Inspect your environment to eliminate as many of these risks as possible. Leaving your laptop running outside in a rainstorm? Not such a good idea. (Been there, done that.)
Computers in manufacturing plants are particularly susceptible to environmental hazards. One technician reported a situation with a computer that had been used on the manufacturing floor of a large equipment manufacturer. The computer and keyboard were covered with a black substance that would not come off. (It was later revealed to be a combination of paint mist and molybdenum grease.) There was so much diesel fume residue in the power supply fan that it would barely turn. The insides and components were covered with a thin, greasy layer of muck. To top it all off, the computer smelled terrible!
Despite all this, the computer still functioned. However, it was prone to reboot itself every now and again. The solution was (as you may have guessed) to clean every component thoroughly and replace the power supply. The muck on the components was able to conduct a small current. Sometimes, that current would go where it wasn't wanted and zap—a reboot. In addition, the power supply fan is supposed to partially cool the inside of the computer. In this computer, the fan was detrimental to the computer because it got its cooling air from the shop floor, which contained diesel fumes, paint fumes, and other chemical fumes. Needless to say, those fumes aren't good for computer components.
Computers and humans have similar tolerances to heat and cold, although computers like the cold better than we do. In general, anything comfortable to us is comfortable to a computer. They don't, however, require food or drink (except maybe a few RAM chips now and again)—keep those away from the computer.
Computers need lots of clean moving air to keep them functioning. One way to ensure that the environment has the least possible effect on your computer is always to leave the blanks (slot covers) in the empty expansion slots on the back of the case. These pieces of metal are designed to keep dirt, dust, and other foreign matter out of the inside of the computer. They also maintain proper airflow within the case to ensure that the computer does not overheat.
You can also purchase computer enclosures to keep the dust out—just make sure that they allow for proper air ventilation. Many times these devices use air filters in much the same way a furnace or a car engine does.
The cleanliness of a computer is extremely important. Buildup of dust, dirt, and oils can prevent the various mechanical parts of a computer from operating. Cleaning them with the right cleaning compounds is equally important. Using the wrong compounds can leave residue behind that is more harmful than the dirt that you are trying to remove.
Most computer cases and monitor cases can be cleaned by using mild soapy water on a clean, lint‐free cloth. Do not use any kind of solvent‐based cleaner on monitor screens, because doing so can cause discoloration and damage to the screen surface. Most often, a simple dusting with a damp cloth (moistened with water) will suffice. Make sure that the power is off before you put anything wet near a computer. Dampen (don't soak) a cloth in mild soap solution and wipe the dirt and dust from the case. Then wipe the moisture from the case with a dry, lint‐free cloth. Anything with a plastic or metal case can be cleaned in this manner.
Additionally, if you spill anything on a keyboard, you can clean it by soaking it in distilled, demineralized water and then drying it off. The extra minerals and impurities have been removed from this type of water, so it will not leave any traces of residue that might interfere with the proper operation of the keyboard after cleaning. The same holds true for the keyboard's cable and its connector.
The electronic connectors of computer equipment, on the other hand, should never touch water. Instead, use a swab moistened in distilled, denatured isopropyl alcohol (also known as electronics or contact cleaner and found in electronics stores) to clean contacts. Doing so will take oxidation off the copper contacts.
Finally, the best way to remove dust and debris from the inside of the computer is to use compressed air (not a vacuum). Compressed air can be more easily directed and doesn't easily produce ESD damage, as a vacuum could. Simply blow the dust from inside the computer by using a stream of compressed air. Make sure to do this outside so that you don't blow dust all over your work area or yourself. Also be sure to wear safety goggles and use an air mask. If you need to use a vacuum, a nonstatic computer vacuum that is specially made for cleaning computer components is recommended. Their nozzles are grounded to prevent ESD from damaging the components of the computer.
One unique challenge when cleaning printers is spilled toner. It sticks to everything and should not be inhaled—it's a carcinogen. Use an electronics vacuum that is designed specifically to pick up toner. A typical vacuum's filter isn't fine enough to catch all the particles, so the toner may be circulated into the air. Normal electronics vacuums may melt the toner instead of picking it up.
Table 21.1 summarizes the most common cleaning tools and their uses.
TABLE 21.1 Computer cleaning tools

Tool | Purpose
---|---
Computer vacuum | Sucking up dust and small particles
Mild soap and water | Cleaning external computer and monitor cases
Demineralized water | Cleaning keyboards or other devices that have contact points that are not metal
Denatured isopropyl alcohol | Cleaning metal contacts, such as those on expansion cards
Monitor wipes | Cleaning monitor screens. Do not use window cleaner.
Lint‐free cloth | Wiping down anything. Don't use a cloth that will leave lint or other residue behind.
Compressed air | Blowing dust or other particles out of hard‐to‐reach areas
Periodically cleaning equipment is one of the easiest ways to prevent costly repairs, but it's also one of the most overlooked tasks. We're often too busy solving urgent crises to deal with these types of tasks. If possible, block out some time every week for the sole purpose of cleaning your equipment.
Each piece of computer equipment that you purchase comes with a manual, usually found online, that contains detailed instructions on the proper handling and use of that component. In addition, many manuals give information on how to open the device for maintenance or on whether you should open the device at all.
If you have the luxury of having paper manuals, don't throw them away. Keep a drawer of a file cabinet specifically for hardware manuals (and keep it organized). You can always look up information on the Internet as well, but having paper manuals on hand is still useful.
In the following sections, we'll cover two topics: using safety documentation and following safety and disposal procedures.
In addition to your product manuals, another place to find safety information is in material safety data sheets (MSDSs). MSDSs include information such as physical product data (boiling point, melting point, flash point, and so forth), potential health risks, storage and disposal recommendations, and spill/leak procedures. With this information, technicians and emergency personnel know how to handle the product as well as respond in the event of an emergency.
MSDSs are typically associated with hazardous chemicals. Indeed, chemicals do not ship without them. MSDSs are not intended for consumer use; rather, they're made for employees or emergency workers who are consistently exposed to the risks of the particular product.
The U.S. Occupational Safety and Health Administration (OSHA) mandates MSDSs only for products that are classified as hazardous chemicals under its Hazard Communication Standard.
One of the interesting things about MSDSs is that OSHA does not require companies to distribute them to consumers. Most companies will be happy to distribute one for their products, but they are under no obligation to do so.
If employees are working with materials that have MSDSs, those employees are required by OSHA to have “ready access” to MSDSs. This means that employees need to be able to get to the sheets without having to fetch a key, contact a supervisor, or submit a procedure request. Remember the file cabinet drawer that you have for the hardware manuals? MSDSs should also be kept readily accessible. Exercise 21.2 helps you find your MSDSs and get familiar enough with them to find critical information.
At this point, you might stop to think for a second, “Do computers really come with hazardous chemicals? Do I really need an MSDS?” Consider this as an example: oxygen. Hardly a dangerous chemical, considering we need to breathe it to live, right? In the atmosphere, oxygen is at 21 percent concentration. At 100 percent concentration, oxygen is highly flammable and can even spontaneously ignite some organic materials. In that sense, and in the eyes of OSHA, nearly everything can be a dangerous chemical.
The sections within an MSDS are the same regardless of the product, but the information inside each section changes. Here is a truncated sample MSDS for ammonium hydrogen sulfate:
**** MATERIAL SAFETY DATA SHEET ****
Ammonium Hydrogen Sulfate
90009
**** SECTION 1—CHEMICAL PRODUCT AND COMPANY IDENTIFICATION ****
MSDS Name: Ammonium Hydrogen Sulfate
Catalog Numbers:
A/5400
Synonyms:
Sulfuric acid, monoammonium salt; Acid ammonium sulfate; Ammonium
acid sulfate.
**** SECTION 2—COMPOSITION, INFORMATION ON INGREDIENTS ****
CAS# Chemical Name % EINECS#
7803–63–6 Ammonium hydrogen sulfate 100 % 232–265–5
Hazard Symbols: C
Risk Phrases: 34
**** SECTION 3—HAZARDS IDENTIFICATION ****
EMERGENCY OVERVIEW
Causes burns. Corrosive. Hygroscopic (absorbs moisture from the air).
Potential Health Effects
Skin:
Causes skin burns.
Ingestion:
May cause severe gastrointestinal tract irritation with nausea, vomiting,
and possible burns.
Inhalation:
Causes severe irritation of upper respiratory tract with coughing, burns,
breathing difficulty, and possible coma.
**** SECTION 4—FIRST-AID MEASURES ****
Skin:
Get medical aid immediately. Immediately flush skin with plenty of water for at
least 15 minutes while removing contaminated clothing and shoes.
Ingestion:
Do not induce vomiting. If victim is conscious and alert, give 2–4 cupfuls
of milk or water. Never give anything by mouth to an unconscious person. Get
medical aid immediately.
Inhalation:
Get medical aid immediately. Remove from exposure and move to fresh air
immediately.
If not breathing, give artificial respiration. If breathing is difficult, give
oxygen.
**** SECTION 5—FIREFIGHTING MEASURES ****
**** SECTION 6—ACCIDENTAL RELEASE MEASURES ****
General Information: Use proper personal protective equipment as indicated
in Section 8.
**** SECTION 7—HANDLING and STORAGE ****
Handling:
Wash thoroughly after handling. Wash hands before eating. Use only
in a well-ventilated area. Do not get in eyes, on skin, or on clothing. Do not
ingest or inhale.
Storage:
Store in a cool, dry place. Keep container closed when not in use.
**** SECTION 8—EXPOSURE CONTROLS, PERSONAL PROTECTION ****
Engineering Controls:
Use adequate general or local exhaust ventilation to keep airborne
concentrations below the permissible exposure limits.
Respirators:
Follow the OSHA respirator regulations found in 29 CFR 1910.134 or European
Standard EN 149. Always use a NIOSH or European Standard EN 149 approved
respirator when necessary.
**** SECTION 9—PHYSICAL AND CHEMICAL PROPERTIES ****
Physical State: Solid
Color: White
Odor: Not available
**** SECTION 10—STABILITY AND REACTIVITY ****
Chemical Stability:
Stable under normal temperatures and pressures.
Conditions to Avoid:
Incompatible materials, dust generation, exposure to moist air or water.
**** SECTION 11—TOXICOLOGICAL INFORMATION ****
RTECS#:
CAS# 7803–63–6: BS4400500
**** SECTION 12—ECOLOGICAL INFORMATION ****
**** SECTION 13—DISPOSAL CONSIDERATIONS ****
Products which are considered hazardous for supply are classified as Special
Waste, and the disposal of such chemicals is covered by regulations which may
vary according to location. Contact a specialist disposal company or the local
waste regulator for advice. Empty containers must be decontaminated before
returning for recycling.
**** SECTION 14—TRANSPORT INFORMATION ****
**** SECTION 15—REGULATORY INFORMATION ****
European/International Regulations
European Labeling in Accordance with EC Directives
Hazard Symbols: C
Risk Phrases:
R 34 Causes burns.
Safety Phrases:
S 26 In case of contact with eyes, rinse immediately with plenty of water and
seek medical advice. S 28 After contact with skin, wash immediately with…
**** SECTION 16—ADDITIONAL INFORMATION ****
MSDS Creation Date: 6/23/2004 Revision #0 Date: Original.
It is relatively easy to put old components away, thinking that you might be able to put them to good use again someday, but doing so is not realistic. Most computers are obsolete as soon as you buy them. And if you have not used them recently, your old computer components will more than likely never be used again.
We recycle cans, plastic, and newspaper, so why not recycle computer equipment? The problem is that most computers contain small amounts of hazardous substances. Some countries are exploring the option of recycling electrical machines, but not all have enacted appropriate measures to enforce their proper disposal.
Regardless of manufacturer or community programs, we can take proactive steps, as consumers and caretakers of our environment, to promote the proper disposal of computer equipment:
Check the U.S. Environmental Protection Agency's website (www.epa.gov) for guidance on recycling and disposing of computer equipment.

Search www.msds.com to see if what you are disposing of has an MSDS. These sheets contain information about the toxicity of a product and whether it can simply be disposed of as trash. They also contain lethal‐dose information.

Check out the Internet for possible waste disposal sites. Table 21.2 lists a few websites that we came across that deal with the disposal of used computer equipment. A quick web search will likely locate some in your area.
TABLE 21.2 Computer recycling websites

Site name | Web address
---|---
Goodwill | www.goodwillsc.org/donate/computers
Staples | www.staples.com/sbd/cre/marketing/sustainability-center/recycling-services/electronics
U.S. EPA | www.epa.gov/recycle/electronics-donation-and-recycling
Tech Dump | www.techdump.org
Following the general rule of thumb, recycle your computer components and consumables whenever you can. In the following sections, we'll look at four classifications of computer‐related components and the proper disposal procedures for each.
The EPA estimates that more than 350 million batteries are purchased annually in the United States. One can only imagine what the worldwide figure is. Batteries contain heavy metals and other toxic ingredients, and common chemistries include alkaline, mercury, lead‐acid, nickel‐cadmium, and nickel‐metal hydride.
When batteries are thrown away and deposited into landfills, the heavy metals inside them will find their way into the ground. From there, they can pollute water sources and eventually find their way into the supply of drinking water. In 1996, the United States passed the Mercury‐Containing and Rechargeable Battery Management Act (aka the Battery Act) with two goals: to phase out the use of mercury in disposable batteries and to provide collection methods and recycling procedures for batteries.
Five types of batteries are most commonly associated with computers and handheld electronic devices: alkaline, nickel‐cadmium (NiCd), nickel‐metal hydride (NiMH), lithium‐ion (Li‐ion), and button cell.
You may have noticed a theme regarding the disposal of batteries: recycling. Many people just throw batteries in the trash and don't think twice about it. However, there are several laws in the United States that require the recycling of many types of batteries, and recycling does indeed help keep the environment clean. For a list of recycling centers in your area, use your local Yellow Pages (under Recycling Centers) or search the Internet.
Computer monitors (CRT monitors, not LCDs) are big and bulky, so what do you do when it's time to get rid of them? As previously mentioned, monitors contain capacitors that are capable of retaining a lethal electric charge after the monitors have been unplugged. You wouldn't want anyone to set off the charge accidentally and die. But what we didn't mention earlier, which is important now, is that most CRT monitors contain high amounts of lead. Most monitors contain several pounds of lead, in fact. Lead is very dangerous to humans and the environment and must be dealt with carefully. Other harmful elements found in CRTs include arsenic, beryllium, cadmium, chromium, mercury, nickel, and zinc.
If you have to dispose of a monitor, contact a computer‐recycling firm. It's best to let professional recyclers handle the monitor for you.
Toner cartridges should be recycled as well. PC recycling centers will take old toner cartridges and properly dispose of them. The toner itself is a carcinogen, and the cartridges can contain heavy metals that are bad for the environment.
Toner cartridges are valuable to companies that refurbish and refill these cartridges. It's actually big business to refill these expensive cartridges, and it's environmentally responsible. Most toner is the same for all types and models of laser printers, so a toner refurbishing center will refill and test these cartridges. Then these companies sell the cartridges for a fraction of the price of new toner cartridges. If a new toner cartridge is installed, the old toner cartridge is boxed up and sent back. Even if your organization doesn't contract with one of these services, they would be happy to take the old cartridges off your hands and keep them out of the trash.
Cell phones and tablets are considered disposable units, with an average life expectancy of two to four years. Their popularity has outpaced that of laptop and desktop computers. These mobile devices are small enough to fit neatly into the trash. However, mobile devices contain the same toxic metals and chemical compositions as their larger cousins. Every mobile device has a circuit board that contains lead, batteries that contain other heavy metals, and a body that contains plastics.
These devices should be recycled responsibly. Many big‐box retailers have an anonymous recycling drop bin where you can recycle a mobile device. There are even automated kiosks where you can get money for a defective device. You simply tell the kiosk what is wrong with the device, the device's make and model, and its condition. The kiosk will then offer you a few dollars to recycle it. The company that owns and operates the kiosk refurbishes the device and resells it for a fraction of the cost of a new one.
Nearly every chemical solvent that you encounter will have a corresponding MSDS. On the MSDS for a chemical, you will find a section detailing the proper methods for disposing of it. Chemical solvents were not designed to be released into the environment, because they could cause significant harm to living organisms if they're ingested. If in doubt, contact a local hazardous materials handler to find out the best way to dispose of a particular chemical solvent.
Cans are generally made from aluminum or other metals, which are not biodegradable. It's best always to recycle these materials. If the cans were used to hold a chemical solvent or otherwise hazardous material, contact a hazardous materials disposal center instead of a recycling center.
Many of the operational procedures that we've discussed up to this point have been about safety—yours, your computer equipment’s, and the environment’s. We've also touched on regulations, as in always be sure to comply with local government regulations. In the following sections, we focus more on the legal side of things. Not understanding legal requirements is not a justifiable defense in a court of law. Considering that IT professionals often deal with software licensing and personally identifiable information, or sometimes encounter prohibited activity or have to deal with a security incident, you should understand the general principles related to these concepts.
This is a situation that no one really wants to deal with, but it happens more often than we would care to admit: a computer you are fixing has content on it that is inappropriate or illegal, or you see someone on your network performing an action that is against policy or laws. How you respond in such a situation can have a significant bearing on your career, the other people involved, and, depending on the situation, the well‐being of your organization. The key to dealing with prohibited content or activity is to have a comprehensive policy in place that covers appropriate behavior. After that, it's a matter of executing the proper steps per the plan when something happens.
Situations involving prohibited content or activities are not easy to address. The accused person might get angry or confrontational, so it's important always to have the right people there to help manage and defuse the situation. If you feel that the situation is severe enough to worry about your own personal safety, don't be afraid to involve the police. While the situation needs to be handled, there's no sense in putting yourself in direct danger to do so.
Creating a policy is the most important part of dealing with prohibited content or actions. Without a policy in place that specifically defines what is and what isn't allowed, and what actions will be taken when a violation of the policy occurs, you don't really have a leg to stand on when a situation happens.
What is contained in the policy depends on the organization for which you work. Generally speaking, if something violates an existing federal or local law, it probably isn't appropriate for your network either. Many organizations also have strict policies against the possession of pornographic or hate‐related materials on the organization's property. Some go further than that, banning personal files such as downloaded music or movies on work computers. Regardless of what is on your policy, always ensure that you have buy‐in from very senior management so that the policy will be considered valid.
Specific examples of content that might be prohibited include pornographic or hate‐related material; pirated software, music, or movies; and any other material that violates federal or local laws or company policy.
A good policy will also contain the action steps to be taken if prohibited content or activity is spotted. For example, what should you do if you find porn on someone's work laptop?
The policy should explicitly outline the punishment for performing specific actions or possessing specific content. The appropriate penalty may very well be based on the type of content found. Something that is deemed mildly offensive might result in a verbal or written warning for the first offense and a more severe penalty for the second offense. If your company has a zero‐tolerance policy, then employees may be terminated and possibly subject to legal action.
Finally, after the policy has been established, it's critical to ensure that all employees are aware of it and have proper training. In fact, it's highly recommended that you have all employees sign a disclosure saying they have read and understand the policy, and that the signed document be kept in their human resources file. Many organizations also require that employees review the policy yearly and re‐sign the affidavit as well.
If you have your policy in place, then your incident response should be relatively scripted. It might not be easy to deal with, but the steps you should take will be outlined for you. Maintain professionalism throughout the incident; people will be judging your reaction as well as your actions. If you see prohibited content and start giggling and walk away, that probably doesn't reflect well on you. Always remember that others are watching you. The specific steps that you take will depend on your policy. The following sections describe the best practices for dealing with security incidents.
An incident can be detected in several different ways. The preceding section used the example of pornographic content on a laptop or mobile device. This is a great illustration of passive detection: you were not looking for the material, but you found it, and now you must respond. Incidents can also be detected actively, through monitoring and alerting tools, or proactively, by deliberately searching for signs of trouble.
Once an incident is detected using one of the methods mentioned in the preceding section (passive, active, or proactive), or it's detected through dumb luck, it's time to spring into action and respond to the incident. The person responding to the incident, called the first responder, should be versed in how to collect evidence in the order of volatility, starting with the most short‐lived evidence (such as the contents of RAM and cache) and working toward more persistent sources (such as disk storage and archival media). If evidence is not collected in time, for example from a computer's RAM, and the computer reboots, that evidence is gone.
The preceding evidence can be collected with sophisticated tools that only a highly trained first responder might have on hand. However, not all of the tools need to be complex. A simple camera can preserve information. For example, if you walk up to a system that displays a ransomware screen, your first reaction should be to take a photo of the screen. If you press a key, the ransomware could crash and disappear. Photo evidence of data and processes that are loaded in RAM is better than having no evidence at all, but a digital copy of the evidence is preferred.
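As a simple illustration of capturing a digital copy of volatile information, the rough sketch below (a hypothetical helper, not a forensic tool) runs a few built‐in Windows commands, such as tasklist and netstat, and saves their output to time‐stamped text files before the machine is powered down. A trained first responder would use dedicated forensic utilities instead, but the idea of recording volatile state before it disappears is the same.

    # capture_volatile.py - rough sketch only; not a substitute for forensic tools.
    # Assumes a Windows system and an evidence folder on removable media (E:\evidence).
    import subprocess
    from datetime import datetime, timezone
    from pathlib import Path

    COMMANDS = {
        "processes": ["tasklist", "/v"],     # running processes
        "network": ["netstat", "-ano"],      # open connections and ports
        "arp_cache": ["arp", "-a"],          # ARP cache entries
        "ip_config": ["ipconfig", "/all"],   # network configuration
    }

    def capture(dest=Path(r"E:\evidence")):
        dest.mkdir(parents=True, exist_ok=True)
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        for name, cmd in COMMANDS.items():
            result = subprocess.run(cmd, capture_output=True, text=True, errors="replace")
            outfile = dest / f"{stamp}_{name}.txt"
            outfile.write_text(f"Collected (UTC): {stamp}\nCommand: {' '.join(cmd)}\n\n"
                               + result.stdout)
            print(f"Saved {outfile}")

    if __name__ == "__main__":
        capture()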
The act of photographing the scene should not be limited to just the computer screen; anything relevant to the incident should be photographed as evidence. Time and date stamps should be overlaid onto the image. This is normally a function of any camera. However, if you can't digitally record the time stamps, a simple alternative is to include a watch in the frame of the photo.
You should take notes with a pad and pen, recording the initial scene, including time and date. Create a chronology of the discovery and collection of the evidence. Remember, any of this could potentially be used in a court of law. The underlying premise is to record as much evidence as possible before the crime scene is tainted by others.
The removed materials should be secured and turned over to the proper authorities. Depending on the situation, materials may be held in a safe, locked location at the office, or they may need to be turned over to local authorities. Have a documented procedure in place to follow for each situation.
The materials that are deemed evidence should be well documented as to why they are considered evidence. The chain of custody documentation should define what the evidence is, why it was collected, who collected it, and when and where it was collected. If the evidence is moved, the chain of custody documentation should also reflect who released it, who received it, when it was moved, and where it was taken.
The chain of custody must be maintained at all times. If a chain of custody of the evidence is not maintained, the evidence may not be admissible in a court of law.
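To make the idea concrete, here is a minimal sketch of what a chain of custody record might capture if you tracked it electronically. The field names are illustrative assumptions, not a legal standard; your organization's legal counsel should define the actual required fields.

    # chain_of_custody.py - illustrative data structure for custody tracking.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class TransferRecord:
        """One hand-off of a piece of evidence."""
        released_by: str
        received_by: str
        timestamp: datetime
        from_location: str
        to_location: str
        reason: str

    @dataclass
    class EvidenceItem:
        """Why an item is evidence and who has handled it."""
        evidence_id: str
        description: str
        reason_collected: str
        collected_by: str
        collected_at: datetime
        collected_location: str
        transfers: List[TransferRecord] = field(default_factory=list)

        def move(self, record: TransferRecord) -> None:
            # Every move is appended, never edited, so the history stays intact.
            self.transfers.append(record)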
Once you've collected the initial set of evidence and in the order of volatility, it's time to report the incident. The incident should be reported to management and a decision should be made whether to involve law enforcement. Involving law enforcement is of course dependent on the severity of the incident. A piece of malware that has infected a single machine is a bit different than malware that has infected an entire network. Your management might elect not to involve law enforcement at all.
Regardless of the direction management takes, the evidence collected, as well as notes taken during the collection, will help an escalation team or law enforcement to proceed in building a case. The goal of the first responder is to collect evidence that answers the basic questions of who, what, when, where, and how.
An escalation team or law enforcement's job is to fill in the blanks by using the evidence. If the evidence is complete and concise, it will be used to build a case against the threat agent. The ultimate goal is to stop a future incident from happening.
Your most important task is to recover from the incident. If your critical ordering system was affected during the incident, it's your job to get it back online. If the incident involved one computer that is used by a task worker, then it's your job to get it back up and running. You might notice a common theme here: it's your job to get things back to normal after the incident. Once you get information flowing again, you can move on to remediating the incident.
During the recovery from the incident, you may have to make changes to the network or systems that support the clients. Any changes during the recovery process should be documented thoroughly. Documenting these changes is important if you are submitting claims to insurance, assessing damages, or looking for future reparations.
All components affected by the incident should be remediated to ensure that all traces of the incident have been removed. Remediation can be as simple as adding firewall rules to the firewall, or it can involve formatting a server and reloading it. The steps to remediation should also include steps to prevent the incident from happening in the future.
Before, during, and after the incident, the documentation process should be under way. You should collect as much information as possible, because a formal incident report defining the key elements of the incident will eventually be written. You will learn about the incident report in Chapter 22, “Documentation and Professionalism.”
It really doesn't matter how you collect information for documentation purposes. It can be pad and pen or something more elaborate. The only stipulation is that the documentation should not be on a system that can be affected by the incident. An offline laptop is fine, as long as the laptop is never introduced to the network affected by the ongoing incident. This could jeopardize all of the documentation efforts and hinder the outcome.
The final step for incident response is to review all the documentation and findings of the incident—a process often called a hot‐wash meeting. During a hot‐wash meeting, the incident response team should talk about what has been done properly during the incident and what procedures should be changed for future incidents. These meetings should be constructive and support standards of excellence for the incident response team.
Another key goal of the review process is to identify threats with characteristics similar to those of the incident. If an employee entered credentials into a phishing page, what measures are in place to prevent this from happening to others in the organization? You may have rules in place for this particular phishing email, but are your employees trained for future incidents similar to this? If not, end‐user training may be required.
Now that you have a good understanding of the process involved in incident response, let's look at some best practices. Practices such as thorough documentation, preservation of the chain of custody, and prompt reporting should be applied to all elements of the incident response process.
When you buy an application, you aren't actually buying the application. Instead, you're buying the right to use the application in a limited way, as prescribed by its licensing agreement. Most people don't read these licensing agreements closely, but suffice it to say, they're pretty slanted in favor of the software manufacturer.
Don't like the terms? Too bad. No negotiation is allowed. If you don't accept the end‐user license agreement (EULA), your only recourse is to return the software for a refund. (Most vendors will refuse to take back an opened box. Still, the software manufacturer is required to take it back and refund your money if you reject the licensing. This is true of programs purchased online as well.)
Although the majority of the applications that you acquire will probably be commercial products, there are a number of alternatives to commercial software sales. Here are some of the license types that you may encounter:
Freeware, for example, is software that you can use at no cost. It is often available from download sites such as www.download.com or from the creator's personal website. Large companies like Google and Microsoft also sometimes offer products for free, because it serves the company's interests to have a lot of people using their software. Examples include Google Chrome and Microsoft Internet Explorer. Freeware doesn't include source code, and users aren't allowed to modify the application.

If you buy any sort of commercial software, you will receive a product key, which you will need to enter during installation or the first time the application is opened. The product key might be emailed to you, or it could be located on the physical media if you got an installation CD‐ROM, DVD, or thumb drive. Figure 21.12 shows an example of a product key.
FIGURE 21.12 A Microsoft product key
In a corporate environment, license management is a critical responsibility. The company may spend thousands or even millions of dollars on software licenses. Money could be wasted on unused licenses, or if the company's computers have unlicensed software, it could result in huge fines. Ignorance is not a legal excuse in this area.
To avoid these problems, it may be best for your company to purchase a software asset management tool, such as Microsoft's Software Asset Management guide (www.microsoft.com/en-us/download/details.aspx?id=31382), License Manager by License Dashboard (www.licensedashboard.com), or FlexNet Manager by Flexera Software (www.flexerasoftware.com). In general, proper license management means keeping an inventory of the licenses you have purchased, tracking where software is actually installed, and reconciling the two on a regular basis so that you are neither paying for unused licenses nor running unlicensed software.
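As a simplified illustration of that reconciliation step, the sketch below compares a hypothetical inventory of purchased license counts against a count of discovered installations. Real software asset management tools gather the installation data automatically; the product names and numbers here are made up.

    # license_check.py - toy reconciliation of purchased licenses vs. installations.
    from collections import Counter

    # Hypothetical data; a real tool would pull these from purchasing records
    # and from an automated inventory of client computers.
    purchased = {"OfficeSuite": 50, "PhotoEditor": 10, "CADPro": 5}
    installed = Counter(["OfficeSuite"] * 47 + ["PhotoEditor"] * 12 + ["CADPro"] * 5)

    for product in sorted(set(purchased) | set(installed)):
        owned = purchased.get(product, 0)
        in_use = installed.get(product, 0)
        if in_use > owned:
            print(f"{product}: {in_use - owned} UNLICENSED installation(s) - fix immediately")
        elif owned > in_use:
            print(f"{product}: {owned - in_use} unused license(s) - potential savings")
        else:
            print(f"{product}: compliant")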
Because of the potential for heavy fines, many companies prohibit the installation of software on client computers unless specifically authorized by a manager or the IT department.
As an IT manager, you will very likely have access to information that you will need to keep closely guarded. For example, you might have access to username and/or password lists, medical or educational records, addresses and phone numbers, or employee records. It's your responsibility to ensure that sensitive information does not get released into the wrong hands. On the flip side, you may encounter information that's sensitive because it's prohibited or illegal. You need to know how to react in those situations as well.
Much of the data you come into contact with could be regulated beyond the organization's internal policies. Regulated data must be identified as it enters your network, and the proper operating procedures should be followed. The operating procedures for such data should be constructed so that you can adhere to the data's regulatory compliance rules. The rules for compliance can exist at the local, state, or federal level. Many of the regulations in the following sections apply at the federal level.
Personally identifiable information (PII) is anything that can be used to identify an individual person on its own or in context with other information. This includes someone's name, address, and other contact information; the names of family members; and other details that people would consider private.
PII should always be kept confidential and secure. It seems like every few months or so we see news stories of data breaches at big companies resulting in stolen credit card data or username and contact lists. This information finds its way into hackers' hands and causes millions of people grief and monetary damages. Be sure that this information is properly secured and can be accessed only by authorized personnel.
Any personal information contained in a document issued by a government or state is considered personal government‐issued information. Examples of government‐issued documents include a birth certificate, Social Security card, identification card, driver's license, resident card, taxpayer ID number, and passport, just to name a few. This category of information is very broad and can overlap with other types of protected data, such as protected health information or other PII.
The danger of personal government‐issued information being compromised is that this information is how a person is identified by the government. The theft of a Social Security number is a direct theft of identity in the eyes of the government, and everything from tax records to credit information is tied to a Social Security number. Therefore, like any PII, this data should also be kept confidential and secure.
Payment Card Industry Data Security Standard (PCI DSS) is a standard of processes and procedures used to handle data related to transactions using payment cards. A payment card is any card that allows the transfer of money for goods or services. Types of payment cards include credit cards, debit cards, or even store gift cards.
PCI DSS compliance is not enforced by government entities. PCI DSS compliance is actually enforced by banks and creditors. Merchants must comply with the PCI DSS standard to maintain payment card services. If a merchant does not comply with PCI DSS standards and a breach occurs, the merchant can be fined by the banks. Once a breach of PCI data occurs, then local, state, and federal laws can apply to the merchant. For example, some laws require the merchant to pay for credit‐monitoring services for victims after a breach.
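One common control associated with PCI DSS is masking the primary account number (PAN) so that only a few digits remain visible when the number is displayed. The following sketch illustrates the idea only; it is not a certified implementation, and the actual masking, storage, and logging requirements should be taken from the PCI DSS documents themselves.

    # mask_pan.py - illustrative masking of a card number for display purposes.
    def mask_pan(pan: str, visible: int = 4) -> str:
        """Replace all but the last `visible` digits of a card number with asterisks."""
        digits = "".join(ch for ch in pan if ch.isdigit())
        return "*" * (len(digits) - visible) + digits[-visible:]

    print(mask_pan("4111 1111 1111 1234"))   # ************1234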
The General Data Protection Regulation (GDPR) is a European Union (EU) law governing how consumer data can be used and protected. The GDPR was created primarily to protect citizens of the European Union. It applies to anyone involved in the processing of data about citizens of the European Union, regardless of where the organization is located.
The GDPR recommends that organizations hire a data protection officer (DPO). This person is the point of contact for all compliance with GDPR, as well as any other compliance requirements your organization falls under. The underlying goal is to achieve consent from the end user of your product or service. Consent to collect information must be proven by an organization beyond a shadow of a doubt. This means that if someone visits your website from the European Union, you must receive consent in clear language to even place a cookie in their web browser. The DPO is responsible for coordinating this language, as well as the life cycle of any data that is collected.
Protected health information (PHI), also known as personal health information, refers to any information used in the health care industry to describe a patient or ailment. This information can be considered “the patient chart” you always see on television. However, electronic health records (EHR) go way beyond the current condition of a patient; they describe a person from the cradle to the grave.
Electronic health records are used to record a patient's vitals every time the patient visits a doctor's office. They represent historical information about patients, as well as billing information used by health care providers. This makes the EHR extremely valuable to a hacker, and stolen health records account for a large share of identity theft.
This type of identity theft is really dangerous! Your diagnosis could be determined based upon vitals, allergies, or conditions that are recorded from a person who assumes your identity. Try to explain to the insurance company that your gallbladder needs to be removed, but their records show they paid to have it taken out already.
PHI can also be used to track the statistics of larger groups of people. However, the data must be anonymized first, before it is put into a publicly addressable database for these statistics and studies. Information such as names, Social Security numbers, phone numbers, health insurance information, account numbers, and specific geographical information must be removed. Interestingly enough, the geographical information that remains can include only the first three digits of a ZIP code.
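As a rough sketch of that anonymization step, the code below strips the direct identifiers from a hypothetical patient record and keeps only the first three digits of the ZIP code. The field names are invented for illustration; real de‐identification must follow the applicable regulations and your organization's procedures.

    # anonymize_phi.py - toy de-identification of a patient record (illustrative only).
    IDENTIFIER_FIELDS = {"name", "ssn", "phone", "insurance_id", "account_number"}

    def anonymize(record: dict) -> dict:
        """Remove direct identifiers and truncate the ZIP code to three digits."""
        cleaned = {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
        if "zip" in cleaned:
            cleaned["zip"] = str(cleaned["zip"])[:3]
        return cleaned

    patient = {
        "name": "Pat Example", "ssn": "123-45-6789", "phone": "555-0100",
        "zip": "19406", "diagnosis_code": "J45.20", "visit_year": 2021,
    }
    print(anonymize(patient))
    # {'zip': '194', 'diagnosis_code': 'J45.20', 'visit_year': 2021}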
As you have learned, there are many different forms of data, and they are regulated for sensitivity and confidentiality purposes. The storage of data is also regulated, depending on the type of data. The data to be retained is usually transactional in nature, such as a financial transaction, but it can also be any data that has touched your systems.
Regulations are not the only governing factor for data retention; your company can also have internal requirements. An example of an internal retention requirement might be for memorandums, access control records, or even video footage. There are different reasons for adhering to an internal data retention requirement, but they all shield the organization from liability in some way. This is very apparent in legal situations. If you keep all data indefinitely on old backups and there is a lawsuit, you will be required to restore those records in a timely fashion, even if the old systems they came from are no longer supported. If you can't produce the data in a timely fashion, you could lose the lawsuit by default in favor of the plaintiff. This court‐ordered production of information is known as e‐discovery.
As an IT professional, it is your responsibility to work with legal counsel in your organization to define data retention periods. However, before you can define the retention periods, you will need to identify the data to be protected. A document profile should be created that clearly defines the types of data you deal with on a daily basis. This stage of the process is also where you can identify outside governing requirements for retention. Once the document profile is created, you should start tagging data in your environment with the data types. This may be as simple as naming the email backup job “email data.” You should then set up a firm policy that controls how long each type of data is kept. Keep in mind that holding data too long is just as bad as not holding it long enough.
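Once the document profile exists, the retention periods themselves can be expressed as simple data that backup and archiving jobs consult. The sketch below shows one hypothetical way to record and look up those periods; the data types and durations shown are examples only, not recommendations, and the real values should come from legal counsel and any regulations that apply to you.

    # retention_policy.py - hypothetical mapping of data types to retention periods.
    from datetime import date, timedelta

    # Example values only; actual periods are set with legal counsel.
    RETENTION_DAYS = {
        "email data": 365 * 3,
        "financial transactions": 365 * 7,
        "access control records": 365 * 1,
        "video footage": 90,
    }

    def purge_date(data_type: str, created: date) -> date:
        """Return the date after which data of this type may be destroyed."""
        return created + timedelta(days=RETENTION_DAYS[data_type])

    print(purge_date("email data", date(2022, 1, 15)))   # 2025-01-14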
This chapter covered three areas of operational procedures that you should integrate into your work: safety, environmental concerns, and legal issues.
First, we looked at the importance of safety procedures. Safety is about protecting you from harm as well as protecting your computer components from getting damaged. Then, we outlined some methods to apply the policies and procedures for a safe working environment, and identified potential safety hazards. Included were preventing electrostatic discharge (ESD) and electromagnetic interference (EMI), creating a safe work environment, and properly handling computer equipment.
Safety involves you and your coworkers, but it also includes environmental issues. The environment can have a harmful effect on computers, but computers can also greatly harm the environment. You need to be familiar with the importance of material safety data sheets (MSDSs) as well as the proper disposal procedures for batteries, display devices, and chemical solvents and cans. These items need to be kept out of the environment because of the damage that they can cause.
Finally, we looked at potential legal issues. Failure to follow certain procedures can expose you or your company to legal proceedings. Make sure that all the software on your computers is legal and licensed and that the computers contain no illegal or prohibited materials. You may also need to protect personally identifiable information, depending on the type of data you have. When incidents happen, you need to know how to respond properly to mitigate the issue.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance‐based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
One of your office coworkers recently tripped on a power cord and injured himself. What should you do to find potential trip hazards in your office? Once the hazards are identified, what actions should you take?
THE FOLLOWING COMPTIA A+ 220‐1102 EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:
Every day at work, we do what it takes to get the job done. As IT professionals, we have millions of facts crammed into our heads about how various hardware components work and what software configuration settings work best for our systems. We know which servers or workstations give us constant trouble and which end users need more help than others. We are counted on to be experts in knowing what to do to keep computers and networks running smoothly. And even though we don't tend to think about it overtly, we need to be experts in how we get things done as well. Even though the how might not be at the top of your mind every day (hopefully you don't go to work every day thinking “Okay, don't get killed by a monitor” or “Let's see if I can be nice to someone today”), it should be integrated into your work processes. Operational procedures define the how, and they provide guidance on the proper ways to get various tasks accomplished.
In this chapter, we will start by talking about documentation used to describe the network and the policies that you will encounter and need to enforce. We will then look at change management practices used to make changes to the network that will affect other business units in your organization.
We will also look at disaster prevention and recovery. In the process, we will cover topics such as backup and recovery, power resiliency, and cloud storage versus local storage.
In the final part of this chapter, we'll switch to discussing professionalism and communication and focus on topics that you need to know for your exam study. Applying the skills learned here will help you pass the exam, but on a more practical level, it will help you become a better technician and possibly further advance your career.
Creating documentation of the network and systems you work on is the last step in many projects. However, it is often the most overlooked step, treated as secondary to fixing new problems or upgrading to the next system. The documentation you produce has positive effects on productivity and problem solving. It means that you don't need to repeat the discovery process the next time you have a problem, and it allows others to work on a problem with the same view of the system that you had at the time it was documented.
Some of the documentation you create will help to create policies and procedures that others will need to follow. Throughout this book, we have discussed the hard controls of these policies. For example, when you implement a password policy, you can dictate that a password be complex and of a certain length. A written policy is a soft control that might detail how to create a complex password. In this chapter, we will look at several different policies that you will come across as a technician.
One of the most important functions of the information technology (IT) department is to solve problems and fulfill end‐user requests. The core of this function, called help desk services or support services, is the ability to receive incoming requests, track progress, and ensure that problems and requests are solved and completed. It is common to find a ticketing system at the heart of this function. A ticketing system adds other benefits to the function of the help desk, such as accountability, reporting, collaboration, and escalation, just to name some of the important benefits.
There are hundreds of ticketing system vendors on the market today, and each one has some unique features. When choosing a ticketing system, the first decision you must make is whether to host it on‐premises, also known as on‐prem, or to host it through a cloud option from the vendor. Many ticketing systems can be purchased as a software‐as‐a‐service (SaaS) cloud model. You are charged a monthly fee based on the number of tickets, storage, or the number of agents (help desk personnel), whereas on‐prem systems are licensed either per agent or per supported user, and storage is not a concern. There are a number of considerations when choosing between on‐prem and cloud‐based systems. The SaaS option is attractive for a number of reasons, mainly availability. However, if all of your employees are local to your site, then on‐prem might be the best solution. If you are conscientious about keeping systems up to date with security patches, an on‐prem system is another box to patch (so to speak), but if you elect SaaS hosting, the vendor is responsible for patching and keeping the system secure.
Besides the initial hosting options, there are several features that are somewhat standard with any ticketing system. These features include escalation, automated routing, knowledge base, email to open a ticket, ticket management, and real‐time reporting, and those are just a few of the features. We will cover many of them in this section.
Once you have a ticketing system in place, it's just a matter of having tickets entered into the system. Your users should have no problem finding network issues to create tickets for. There are three main methods of generating a ticket: email, the portal, and manual entry.
Email is probably the easiest way to enter a ticket. It is customary when setting up a ticket system to dedicate an email address for entering tickets, such as helpdesk@wiley.com or support@wiley.com. When a user has a problem, they simply have to email the ticketing system and a ticket is automatically generated. Their email address will become the identity for the ticket related to the problem, and IT can converse with them via email. This method of ticket entry is commonly used for external customers, such as product support.
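To show roughly how email entry works behind the scenes, here is a stripped‐down sketch that polls a dedicated mailbox and turns each unread message into a ticket record. It uses only the Python standard library; the server name, mailbox credentials, and the create_ticket step are placeholders for whatever your ticketing system actually provides.

    # email_to_ticket.py - simplified sketch of an email-driven ticket intake.
    # Assumes an IMAP mailbox dedicated to the help desk (e.g., helpdesk@example.com).
    import email
    import imaplib

    IMAP_SERVER = "imap.example.com"        # placeholder
    MAILBOX_USER = "helpdesk@example.com"   # placeholder
    MAILBOX_PASS = "app-password-here"      # placeholder; use a vaulted secret in practice

    def create_ticket(requester: str, subject: str, body: str) -> None:
        # Placeholder: a real system would insert a row in its database or call its API.
        print(f"New ticket from {requester}: {subject}\n{body[:120]}...")

    def poll_mailbox() -> None:
        with imaplib.IMAP4_SSL(IMAP_SERVER) as imap:
            imap.login(MAILBOX_USER, MAILBOX_PASS)
            imap.select("INBOX")
            _, data = imap.search(None, "UNSEEN")
            for num in data[0].split():
                _, msg_data = imap.fetch(num.decode(), "(RFC822)")
                msg = email.message_from_bytes(msg_data[0][1])
                body = ""
                for part in msg.walk():
                    if part.get_content_type() == "text/plain":
                        body = part.get_payload(decode=True).decode(errors="replace")
                        break
                create_ticket(msg.get("From", ""), msg.get("Subject", "(no subject)"), body)

    if __name__ == "__main__":
        poll_mailbox()

The sender's address becomes the requester's identity on the ticket, which is why a dedicated help desk mailbox works so well for external customers.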
Every ticketing system will have a portal for the users and help desk personnel to log into. This of course means that the users and the help desk personnel need an existing account, or users can sign up for an account when they enter the ticket. These types of ticketing systems are often established for internal issues inside the organization. Once signed into the portal, a user has several fields to fill out in order to submit the ticket. This type of setup is nice for users if they have multiple tickets and want to track them simultaneously.
The manual entry of tickets is a catch‐all entry method. A help desk support person is responsible for entering the ticket information manually to create a ticket. This is a common practice when someone calls into the help desk. The entry of the ticket serves two main purposes: The first is that it allows for follow‐up or escalation of the problem. The second purpose is that entering a ticket can identify a problem common to your network—for example, if a system has just been upgraded and calls come into the help desk because of a problem with the upgraded system. The user or even your help desk personnel might not know of the recent upgrade. However, the accumulation of tickets for similar issues will allow help desk personnel to identify a systemwide problem. The problem(s) can then be escalated to the party responsible for the problematic upgraded system. There are many other reasons for manually entering a ticket, such as scheduling of help desk staff, identifying specific skills required by help desk staff, and the overall volume of work. These are just a few; each organization has its own reasons.
These are the most common ticket entry methods for ticketing systems. There are many other methods, such as application‐triggered entry and interactive voice response (IVR) automation.
The devil is in the details, as the saying goes, and that is also true when entering a ticket into the ticketing system. There are several elements that need to be entered correctly for the successful resolution of a problem. If a user is self‐entering the ticket, then you will have no control over this initial process.
When a user enters a ticket using the portal, it is common for them to be very brief with their problem description. This is usually because the description window is one field of many and the user may feel they have to be brief. Problems that are submitted via email might have more elaborate explanations, because users may feel more comfortable communicating in email. However, other fields may not be filled out, such as severity, contact information, or problem category. The ticketing system generally cannot decipher these elements from a simple email.
When acting as a help desk support person answering the help desk line, we have the most control over manually entering a ticket into the ticketing system. We also directly interface with the user. So, there should be a standardized process for collecting the necessary information, as well as some best practices to be followed. Always keep in mind that your interactions with the user make an impression on the entire department. You should have good interpersonal skills, display empathy, and above all have patience. All of these skills will help both you and the user reach a resolution more quickly.
Another important skill that you should display is actively listening to the user. This skill is often related to the skill that detectives have when interviewing a person. When a user calls into the help desk, the critical information to collect is who they are, how to contact them, and a description of the problem. In addition, you should ask the urgency of the issue.
If you are able to enter information into the ticket entry form as the person is talking to you, do so. If you are unable to type and listen, or you must allow for long awkward pauses as you type, use the trusted method of pad and paper. You can always enter the information after the person is off the phone and has moved on to their next task while awaiting a response to the problem. Or, if you solved the problem, you can enter the ticket with the resolution after they are off the phone so that it serves as a follow‐up record. Always exercise speed and accuracy in obtaining the information, especially if you are not able to help the person and need to escalate the ticket to someone else.
Following all of these best practices, you should obtain at least the following information on the initial call with the user: the user's name, their contact information, a clear description of the problem, the problem category, and the urgency or severity of the issue.
Every IT department has a structure, and they vary from organization to organization. On a very high level there are typically two main groups of support personnel: network administrators and application/database administrators. However, your organization might have security administrators, application developers, storage administrators, virtualization administrators—and these just scratch the surface.
In each group of support personnel, there are varied levels of experience, support, and responsibilities. The simplest structure is front‐office personnel, who interface with the users, and back‐office personnel, who interface with the front‐office personnel and make systemwide changes. Depending on the size of your organization, you may also have an intermediate level of personnel who interface with the back‐end and front‐end personnel. This intermediate level serves as a buffer to keep engineers separated from the day‐to‐day problems. These levels are often numbered from basic knowledge to expert knowledge, level 1 through level 3.
Regardless of your structure, these various levels are considered escalation points. As an IT technician or even an IT administrator, you are not always expected to have the answer, but you are expected to be able to get the answer. When you don't have the answer and must ask someone more knowledgeable, this process is called escalation. With a ticketing system, if all the information is properly obtained, you can escalate the ticket to a higher level of technician or administrator. The next level up the support chain should be able to read through the ticket notes and work on a resolution. When a ticket is escalated to another person, it is common for that person to own the ticket (problem) and be responsible for communicating with the user.
You may also find that you are the most knowledgeable person about a particular system in your organization, yet you still do not have an answer. This is fine and it happens all the time, but always keep in mind that you are expected to be able to find the answer. This requires an escalation of the problem to a third party, potentially outside your organization. It is also the reason support contracts should be kept current so that you have an escalation point outside of your organization. When an escalation is made outside of your organization, the point of contact (POC) inside your organization will be the owner of the ticket.
The requirement and benefits of clear communications cannot be overstated—not just verbal communications, but also written communications between technicians and users, as well as between technicians and their escalation points. Clear, concise written communications can also break down verbal communications problems between regions of the world. There are three stages to any problem where clear, concise written communications are required:
Although the problem resolution should be where the communication with the user ceases, it is important to follow up with the user to make sure the problem is resolved. This important step should be done right before the ticket is closed. You may learn that the final resolution presented does not work for the user and that they continue to use the work‐around that you presented as a temporary fix. This often happens when you must escalate the problem and communications are not clear and concise between the next technician and the user.
The ticket management process should include the steps of entry, resolution, solution, follow‐up, and knowledge base. Each ticket that is completed strengthens your technical support for future problems. It also strengthens the faith the users have that the IT department can solve their problems. When you follow up with the user, you verify that all of your efforts and your escalation point's efforts are justified by the solution.
Many ticket systems allow for the entry of a knowledge base article for future users and technicians to self‐service their problems. Writing the draft of a knowledge base article should happen after you have identified a successful follow‐up with the user.
Asset management is an important part of the IT department's responsibilities, because the IT assets are considered fixed tangible assets. Some other examples of fixed tangible assets are land, furniture, and office equipment. When equipment is initially purchased, the accounting department records it as an asset on the company's general ledger, because it adds to the value of the company. Over time, however, the asset will lose its initial value. The accounting department will depreciate the value of the asset based on its perceived lifespan.
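As a simple, hedged illustration of how an asset loses book value over its perceived lifespan, the sketch below applies straight-line depreciation. The figures and the straight-line method are assumptions for illustration only; accounting departments may use other depreciation schedules.

def straight_line_book_values(cost, salvage_value, lifespan_years):
    """Return the asset's book value at the end of each year (straight-line method)."""
    annual_depreciation = (cost - salvage_value) / lifespan_years
    return [round(cost - annual_depreciation * year, 2)
            for year in range(1, lifespan_years + 1)]

# Hypothetical example: a $1,200 workstation with a 4-year lifespan and no salvage value
print(straight_line_book_values(1200, 0, 4))   # [900.0, 600.0, 300.0, 0.0]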
The management of these assets benefits the organization in defining the organization's worth. The management of assets also helps the IT department in forecasting upgrades and future expenditures for growth. In this section we will cover the various elements of asset management as it applies to the IT department.
There are a number of ways to manage assets for the organization. Choosing a way to manage assets depends on what needs to be done with the information. Asset management at an organization‐wide level is often a module of an accounting package used by the company. This software allows an asset (equipment) to be tracked by associating a number on the asset tag with the condition, business unit, and perceived value of the equipment. Examples of this equipment are desks, land, and even computer equipment. These types of databases work well for reporting on the value of equipment that the organization owns to calculate a net worth for an organization, but they do very little in helping an IT department plan upgrades.
Laptops, desktops, and other devices have variables such as storage, RAM, operating system versions, and other variables unique to the hardware and software of the device. Asset management systems are databases that collect data from the operating system through the use of an agent. This type of asset management is more detailed than a purchasing record from the accounting department. Once the information is collected, reports can be drawn when upgrades are required. For example, a report you may compile in the asset management system might be all operating systems that match Windows 10 and that have less than 4 GB of RAM and hard drives smaller than 100 GB. You then have a report of the hardware that must be upgraded before the operating system can be upgraded to Windows 11. When using an asset management system for an organization that spans a large geographic area, this is invaluable information that otherwise would have taken days to collect.
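The report just described is essentially a filter over the collected inventory data. The following minimal sketch shows the idea against a hypothetical list of records, as an agent might report them; the field names and thresholds are assumptions for illustration.

# Hypothetical inventory records collected by an asset management agent
inventory = [
    {"asset_tag": "100231", "os": "Windows 10", "ram_gb": 4,  "disk_gb": 256},
    {"asset_tag": "100232", "os": "Windows 10", "ram_gb": 2,  "disk_gb": 64},
    {"asset_tag": "100233", "os": "Windows 11", "ram_gb": 16, "disk_gb": 512},
]

# Devices running Windows 10 with less than 4 GB of RAM and less than 100 GB of disk
needs_upgrade = [
    device for device in inventory
    if device["os"] == "Windows 10"
    and device["ram_gb"] < 4
    and device["disk_gb"] < 100
]

for device in needs_upgrade:
    print(device["asset_tag"], device["ram_gb"], "GB RAM,", device["disk_gb"], "GB disk")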
Asset management systems don't stop at hardware; software packages and their accompanying licensing are considered assets as well. Many asset management systems can also collect a list of the software installed on the devices in your organization. They can also include detailed licensing usage information so that you can gauge where licensing is being used efficiently and where it is not based on usage.
Not all asset management requires databases and asset management systems. When managing a small amount of equipment, an inventory list is more than sufficient. The list can be a simple Microsoft Excel sheet detailing the types of equipment and their associated quantities. These inventory lists work really well when trying to control consumable electronics like mice, keyboards, and monitors. Once the rotating stock of equipment becomes too large in quantity and value, it's time to look at an asset management system.
All computer and network equipment should be tracked, from the cradle to the grave, by the IT department. When equipment enters the company, it should be labeled with an asset tag, as shown in Figure 22.1. The asset tag is often a permanent metallic sticker or metallic plate that is riveted to the equipment. The asset tag often has a barcode, which encodes the number that identifies the asset. This asset tag should then be entered into the asset management software, by either typing in the number or scanning the barcode.
FIGURE 22.1 An asset tag
The entire life cycle for any IT system is cyclical and differs slightly depending on the assets. The typical life cycle for IT systems consists of purchasing, deploying, managing, and retiring. The exact procedures for the IT life cycle will depend on your organization and the goods or services.
The first step is the procurement of the goods or services. This process is typically standardized by your organization's business affairs department or finance department. Just like any process, the procurement process differs from organization to organization. An example of a procurement life cycle is shown in Figure 22.2; yours may differ slightly. Regardless, the procurement life cycle will always start by identifying the need for the goods or services being requested. If it costs money for the organization, then it has to fill a need or solve a problem. This is probably the most important step in the procurement process. If it does not solve an apparent problem or need, then it might not get approved or may be sidelined for another budget cycle.
After the need is identified, it's time to obtain quotes for the goods or services. You should have three comparative quotes for the goods or services being requested. However, depending on the goods or services, achieving that may not be feasible. Examples where comparable quotes are unobtainable are direct purchasing from the vendor, vendor‐registered value‐added resellers (VARs), and custom goods or services.
The budget approval process will be dependent on the reported needs of the goods or services, as previously explained. The goods or services are submitted for approval under the operational budget (OpEx) or the capital budget (CapEx). Items being submitted for the CapEx budget will depend on their value and utility. The test is always, can the item be depreciated over the expected life of the product? Examples are servers, workstations, and other equipment. Anything that cannot be depreciated, such as services, will fall into the OpEx budget. It is likely that your organization has a standard characterization of goods and services and which budget is applicable. The outcome of the budget approval process will either be approved or denied, but it can also be conditionally approved based on meeting goals or other conditions.
FIGURE 22.2 An example of a procurement life cycle
Once the goods or services are approved, your business affairs department will work with the vendor or reseller to negotiate price, terms and conditions, and the overall contract/scope of work (SOW). Once the purchase is completed, you will receive the goods or services. At this point, the contract or SOW is important, because it will define when the vendor is to be paid. If all the goods are not received or the services are not complete, then the vendor is not entitled to send an invoice for payment or the invoice can be held. Although this sounds like a simple part of the procurement process, it is often overlooked. You should never begin payment until the goods are received or the contract/SOW is satisfied.
During the initial phase of obtaining quotes, you should identify the cost for ongoing support, maintenance, or licensing of the goods or services. These costs should be submitted to the operational expense budget as an ongoing/recurring cost, since these costs are usually a monthly or annual cost. The vendor or VAR might also include 3–5 years of support, maintenance, or licensing in the quote so that it can be submitted to a capital budget. This will be based on your organization's processes. Typically, the vendor will include an initial warranty or license with the original purchase of the product or service.
Every product or service outlives its usefulness. This is where we identify retiring or upgrading the product or service. This retirement or upgrade will then start the procurement process all over again. This time around, identifying the needs is easier, unless the retirement does not necessitate replacing the product.
When assets are acquired by the organization, they must be managed throughout their life cycle. This typically requires assigning a person to manage the group of assets, such as laptops, servers, and hotspots. This is a critical step in the management of the asset. The person who manages the assets is responsible for identifying users who are assigned to the devices in the event of termination. The responsible person is also required to forecast upgrades and perform accounting for all assets over their life cycles. Each organization has its own requirements, but these are the top requirements for asset management.
Documentation is extremely important to an IT department, not to mention the entire organization. It serves many different purposes, such as educating new IT workers, recording work performed, highlighting problems, and describing normal functionality. However, documentation is usually one of the functions that suffer the most when projects are hurried and there is a push to start the next project.
In the following sections, we will cover the most common documents that help support the IT department and day‐to‐day operations. Some of these documents are prepared by collecting information in a specific manner and then detailing the results; examples are site surveys and baseline configurations. However, many of these documents simply detail how the network is connected and how it functions. The documentation process of the existing network components is usually the best way to learn the network.
An acceptable use policy (AUP) is an internal policy used to protect an organization's resources from employee abuse. Employees use a number of resources to conduct an organization's business. Email is one such example. It is generally not acceptable for an employee to use an organization's email for religious, political, or personal causes; illegal activities; or commercial use outside of the organization's interest.
An organization's legal counsel, human resources department, and IT department are responsible for developing the AUP. The systems that should be included in the AUP are telephone, Internet, email, and subscription services that the organization retains. The AUP might not be exclusive to electronic resources; the organization might also include postage and other nonelectronic resources that could be abused.
Documentation should be the last step of any work you perform—we can't stress that enough. In network troubleshooting, however, documentation is also created as you collect information, because it helps you understand the problem. You gain an understanding of the problem by summarizing what you've learned into a drawing on a page. This, in turn, allows you to understand how something works and why it works. This type of documentation is called a scratch diagram. It is not formal documentation; it's just scratched out with pen and paper, as shown in Figure 22.3.
FIGURE 22.3 Scratch documentation
Although a scratch diagram is great for diagnostics, it's not meant to be the final formal documentation of a project or system. A finished diagram should be created in a program such as Microsoft Visio or SmartDraw. These are just a few examples of programs used for network documentation; many others are available.
Regardless of which program you choose, you should create all documentation in the program, and all your staff should have access for modifications. Figure 22.4 shows an example of a finished diagram that you might produce from the scratch diagram in Figure 22.3. This documentation is much more refined and would most likely be your final documentation at the end of a project, problem, or implementation of a network system.
FIGURE 22.4 Finish diagram
There are some common symbols that you can use when creating either a scratch diagram or a finished diagram. The symbols shown in Figure 22.5 are universally recognized by network professionals. Although you can adapt your own symbols for variation, they should remain similar to those shown here so that someone does not have to ask you what something represents.
FIGURE 22.5 Common networking symbols
Logical diagrams are useful for diagnostic purposes and for creating high‐level documentation. They allow you to see how a network works and represent the logical flow of information. In the logical diagram shown in Figure 22.6, you can see that Client 1 can communicate directly with the other computers on the same network segment. However, if Client 1 wants to communicate with Client 3, it must communicate through the router.
FIGURE 22.6 A logical network diagram
Physical diagrams are also useful for diagnostic purposes and for creating precise documentation. Physical diagrams define a network's physical connections. The physical documentation details why a network works by showing exactly how the information will flow. For example, in the physical diagram shown in Figure 22.7, you can see exactly how Client 1 is connected to the network and the devices it will traverse when it communicates with Client 3.
FIGURE 22.7 A physical network diagram
You may wonder why it's necessary to follow a certain procedure. The answer sometimes is that the procedure is an outcome of a law, otherwise known as a regulation. Laws are created at the federal, state, and local levels. The laws that are externally controlled and imposed on an organization are called regulations. The following are various regulations you may encounter while working in IT:
Your organization must comply with these regulations, or you could risk fines or, in some cases, even jail time. Your organization can comply with regulations by creating internal policies. These policies have a major influence on processes and, ultimately, procedures that your business unit in the organization will need to follow, as shown in Figure 22.8. So, to answer the question of why you need to follow a procedure, it's often the result of regulations imposed on your organization.
FIGURE 22.8 Regulations, compliance, and policies
The overall execution of policies, processes, and procedures when driven by regulations is known as compliance. Ensuring compliance with regulations is often the responsibility of the compliance officer in the organization. This person is responsible for reading the regulations (laws) and interpreting how they affect the organization and the business units in the organization. The compliance officer works with the business unit directors to create a policy to internally enforce these regulations so that the organization is compliant. An audit process is often created so that adherence to the policy can be reported on for compliance.
Once the policy is created, the process can then be defined or modified. A process consists of numerous procedures or direct instructions for employees to follow. Figure 22.9 shows a typical policy for disposing of hazardous waste.
FIGURE 22.9 Policy for disposing of hazardous waste
The process of decommissioning network equipment might be one of the processes affected by the policy. Procedures are steps within a process, and these, too, are affected (indirectly) by the policy. As the example shows, a regulation might have been created that affects the handling of hazardous waste. To ensure compliance, a hazardous waste policy was created. The process of decommissioning equipment was affected by the policy. As a result, the procedures (steps) to decommission equipment were affected as well.
A common documentation method that is widely accepted is the use of splash screens or screen captures to detail a problem, the solution to a problem, or the installation of software. It's a very efficient method because you can quickly illustrate a problem, solution, or installation with simple screen captures. The Windows operating system has a built‐in tool called Steps Recorder to assist with obtaining screen captures, as shown in Figure 22.10.
The software will capture mouse clicks and save screen captures, along with some context for what has been clicked. At the end of the recording you can review the captured screens, view them as a slide show, and view additional information. You can save everything to a ZIP file containing a single MHT (MIME HTML) file that holds all the screen captures.
Steps Recorder is not the only tool that can be used to capture splash screens. Several third-party applications are available, each with different features that make it unique. A popular third-party application is Camtasia, which allows the capture of live video screen recording and provides a video editor.
An incident is any event that is unusual or outside of the normal processes. You may encounter many different types of incidents as a technician: network security incidents, network outage incidents, and even customer service incidents. Regardless of which type of incident transpires, an incident document should be completed so that there is a record of the event. A record of the incident allows for further review after the incident has subsided so that it is not repeated.
FIGURE 22.10 Windows Steps Recorder
The incident document should be completed as soon as possible so that key details are not forgotten. This document is often used as an executive brief for key stakeholders in the company, such as C-level people—for example, the chief information officer (CIO). The incident document can also be public-facing and used to inform customers of the incident. When used in this fashion, the incident document allows the organization to communicate with transparency about a major incident they may have experienced. Chapter 21, “Safety and Environmental Concerns,” covered the processes and procedures for incident response in further detail. Here are common elements of a network incident document:
Although these are the most common elements of an incident document, the document is not limited to these elements. Each organization has different needs for the process of reviewing network incidents. A template should be created so that there is consistency in the reporting of incidents.
When organizations create policies, they will outline specific processes to adhere to the policies. Throughout this discussion you may see the words policy and plan. Policies are also considered plans for the organization; once a plan is ratified, it becomes a policy. Put another way, a policy is a mature and enforced plan of action. Each process derived from the plan or policy contains a list of steps that are called standard operating procedures (SOPs), as shown in Figure 22.11. All of these components are part of the documentation process for a quality management system (QMS). QMSs are created to meet certain International Organization for Standardization (ISO) requirements. A common ISO certification is ISO 9001. When a company is ISO certified, it means that it adheres to strict quality standards for consistent outcomes and has a detailed operation plan. There are many information technology ISO standards that your organization can be certified with. You have probably seen these certifications in a sales manual at some point.
FIGURE 22.11 Standard operating procedures
A process in a QMS is just that—a process. A process is defined as taking an input and creating an output. Here's an example: A specific server needs to be decommissioned, which is the input to the process of decommissioning a server, and the output of the process is that the server is wiped clean of corporate information. The process in this example is the decommissioning of a server. (We are oversimplifying the process in this example.) Most processes have an input specification and an output specification.
The SOP in this example outlines how to perform the process of decommissioning the server. The SOP should clearly explain who is responsible and the standard they must achieve for the process. In the example of a decommissioned hard drive, the SOP would define the following:
Several tasks are created from the SOP document. Each task is defined as part of the procedure to achieve the decommissioning of the server. The work instructions serve two primary purposes. The first is to detail how each task should be performed. The exact steps are listed in the work instructions, and for the previous example, they may include the following:
The second purpose of the work instructions is to provide a training process for new employees. The work instructions are a more detailed portion of the procedure, so it becomes a training item for new employees on how to perform their job.
If something is worth doing the first time, it's worth documenting, because chances are it will need to be done again. This is often the case when installing custom software packages. When a new software package is purchased, a technician is often assigned to assist and install the application for the users. It is that technician's responsibility to document the software installation process. This ensures that other technicians will not have to start from scratch to help the next user who needs the software package installed.
Documenting the software installation will save time for subsequent installations. It also allows the initial technician to pass the knowledge on to other teammates. This is obviously important for the rest of the IT team, and it's also important for the initial technician. It is often the case that the original technician will be assigned the subsequent installations, mainly because they are the only one who knows how to install the software. Therefore, documenting the process of installing the software serves two purposes: saving time and passing knowledge on to other teammates.
As employees are hired in your organization, a certain amount of initial interaction with IT is required. This interaction is called the onboarding procedure and is often coordinated with the HR department in your organization. During the onboarding procedure a new-user setup checklist should be followed. Examples of the items on the checklist include showing the user how to log in for the first time and change their password. The password policy is often the first policy discussed with the user. Other policies such as bring your own device (BYOD), acceptable use policies (AUPs), and information assurance should also be discussed during the onboarding procedure. Email, file storage, and related policies should be covered as well. Each organization has a different set of criteria that make up the onboarding procedures.
Eventually, employees will leave your organization. The offboarding procedure ensures that information access is terminated when the user is terminated. A user termination checklist should be followed during the offboarding procedure. The process will be initiated by the HR department and should be immediately performed by the IT department. This process can be automated by using the organization's employee management system. The procedure can also be manually performed if the employee management system is not automated. However, the procedure must be performed promptly, since access to the company's information systems is the responsibility of the IT department. During the offboarding procedure, email access or BYOD access is removed through the use of the mobile device management (MDM) software; the user account is disabled; and IT should make sure the user is not connected to the IT systems remotely. The offboarding procedure may also specify that a supervisor must assume ownership of the terminated employee's voicemail, email, and files.
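A minimal sketch of how such an offboarding checklist might be driven in an automated fashion is shown below. The step names and functions are hypothetical placeholder stubs; the real actions would happen in your directory service, MDM, and remote access systems.

def offboard_user(username, supervisor):
    """Run the offboarding checklist in order; each action here is a hypothetical stub."""
    steps = [
        ("Disable user account",           lambda: print(f"Disabling account {username}")),
        ("Remove email/BYOD access via MDM", lambda: print(f"Wiping corporate profile for {username}")),
        ("Terminate remote connections",   lambda: print(f"Ending remote sessions for {username}")),
        ("Reassign voicemail/email/files", lambda: print(f"Granting {supervisor} ownership of {username}'s data")),
    ]
    for description, action in steps:
        action()
        print(f"[done] {description}")

offboard_user("jdoe", "asmith")

Driving the checklist from a single list like this makes it harder to skip a step when a termination request arrives unexpectedly.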
If time is spent on a problem, it's worth documenting so that the same amount of time is not required by someone else or yourself in the future. A knowledge base is a collection of problems with solutions that both your internal customers (IT staff) and external customers (end users) can use to solve common problems. The Microsoft Knowledge Base is a great example of a knowledge base. It contains more than 200,000 public articles and just as many that are private and accessible only by partners. Most helpdesk software allows for the creation of a knowledge base article from the resolution of a problem. A knowledge base can become very large, so many knowledge bases allow keyword searches.
Fortunately, you don't need fancy helpdesk software to create a knowledge base. You can simply have a collection of articles that are accessible to either your colleagues or your end users. A knowledge base article should be clear and easy to understand. Make sure that you define any terms or jargon used within the article so that you don't lose the audience it is intended for. Here are common elements of a typical knowledge base article:
Although these are the most common elements for a knowledge base article, you are not constrained to only these elements. Whichever format you end up using, it is important to be consistent. A template should be drafted so that there is consistency. Consistency allows either the technician or the end user to know what to expect when they review the article.
When you implement a new system or change an existing system, you affect a lot of people. You also affect business processes and other business units with these changes. Don't underestimate the power of the documentation you produce as a technician. It can and often will be used by change management groups to review the impact of your proposed changes.
Change management is a process often found in large corporations, publicly held corporations, and industries such as financial services that have regulatory requirements. However, change management is not exclusive to these types of organizations. The main purpose of change management is to standardize the methods and procedures used to handle changes in the company. These changes can be soft changes of personnel or processes, or hard changes of network services and systems.
When changes are proposed to a process or set of processes, a document is drafted called the change management plan document. This document is used throughout the change management process to evaluate the impact to the business continuity of the organization. In the following section, we will discuss the elements of a change management plan document.
The documented business process is incorporated into the change management plan document. It provides an overview of the business process that the changes are expected to affect. It allows everyone involved in the process both directly and indirectly to understand the entire process.
The documentation specifically defines who interacts with, how they interact with, why they interact with, and when they interact with the process. For example, if your company created widgets, your documentation might detail the process of manufacturing the widget. The document would describe the following:
The rollback plan, also called the backout plan, describes the steps to roll back from a failed primary plan. If it were determined that the primary plan could not be completed, you would either implement an alternate (secondary) plan or a rollback plan, depending on the changes proposed in the change management plan document. Like the primary and alternate plans, the rollback plan should contain the steps to be taken in the event the rollback plan must be executed. The rollback plan should also document any changes to configuration so that they can be reverted. Most of the rollback plan will consist of the original configuration, with any additional steps needed to revert to it.
Sandbox testing is extremely useful when you want to test a change before placing it into production. A sandbox can be constructed to match your environment; you can then implement the change and fine‐tune your primary plan. The use of a sandbox testing environment allows you to hone your process for the proposed change while observing any potential issues.
The introduction of virtual machines makes it very easy to set up a sandbox for testing. You can clone production servers into an isolated network and then create snapshots on the server in the sandbox and test over and over again, until all the bugs are worked out of the primary plan.
Every process in the organization must have a person who is assigned to be the responsible staff member. This person oversees the process and can answer questions about the process. If there are any changes to the process or changes that can affect the process, this person acts as the main point of contact. They can then facilitate any changes to the process.
As an example, you may assign a person to be the responsible party for the electronics decommissioning process. Any questions about disposal of electronics should be directed to this person. If your organization is choosing a new e‐waste company, it will affect the decommissioning process. Therefore, this person should be included in the decision as a stakeholder. Any changes can then be adjusted or integrated into the decommissioning process, and this person can facilitate the changes.
The change management process often begins with a request form that details the proposed change. The exact elements in the request form will differ slightly depending on your organization's requirements. The following lists the most common elements found on the change management request form. Some of the information found on the request form is preliminary; the information will be expanded upon as the request form transitions into the change control document.
The purpose of change is the reason the change management process starts. Either your business unit requires a change that will affect others, or another business unit requires a change that can affect your business unit indirectly. A change to any part of the process, such as the intake of raw materials, could affect the end result. Change is an essential component of a business, and it should be expected. If your company only created widgets and never evolved, you would eventually be out of business.
Unfortunately, not all changes support the company's product line directly. Some changes are imposed on the company, because IT systems are constantly changing. As a technician, you are constantly upgrading and patching systems and equipment. These upgrades and patches are considered changes that can affect the entire business process.
This section of the change management plan document should explain why the change is necessary. It needs to include any vendor documentation explaining the change to the product. For example, if the proposed change were to install a Microsoft Windows security patch, the purpose of the change would be the security of the Windows operating system. The vendor documentation in this example would be the knowledge base article that normally accompanies Windows security patches. Other examples of purposes of change might be legal, marketing, performance, capacity, a software bug, or a process problem that requires a change.
The scope of change details how many systems the proposed change will affect. The scope could involve only one system, or it could be all the systems in an entire enterprise. The scope of change is not limited to the number of systems that will be changed. The scope can also describe how many people a proposed change will affect. For example, if you propose to change lines of code in an ordering system, the change could affect your salespeople, customers, and delivery of the products. The scope of this change could impact the business continuity directly if something goes wrong during the change.
When creating this section of the change management plan documentation, be sure to document which systems the proposed change will affect, the number of systems the proposed change will affect, the number of people the proposed change will affect, as well as whether anyone will be directly or indirectly affected. In addition, you should include the proposed date and time of the change and how long the change will take. Keep in mind that this section allows the change management team to evaluate how big the proposed change is. The scope should answer the following questions:
Whenever a change is made to a system or equipment, there is the potential for the system or equipment to fail. The change could even cause another system or piece of equipment to fail. In some circumstances, the change might be successful but inadvertently cause problems elsewhere in the business process. For example, if a change to an ordering system causes confusion in the ordering process, sales might be inadvertently lost.
Risk analysis is the process of analyzing the proposed changes for the possibility of failure or undesirable consequences. Although you will include this section in the initial change management plan document, the risk analysis you perform on your own will be narrow in perspective, because you will focus on the process from the IT aspect. A change advisory board will perform a much larger risk analysis. This team will have a much larger perspective, since they come from various business units in the organization. From this analysis, a proper risk level to the organization can be determined. The risk level will dictate how much time is spent on the possibility of failure or undesirable consequences from the change.
The plan for change section of the change management plan document explains how the proposed change will be executed. Steps should be detailed on the changes and the order of the changes. If changes were to be made in configuration files, switches, or routers, you would document the changes to the configuration and why each part of the configuration is being changed. Firmware changes would list the version being upgraded from and the version being upgraded to. The idea is to provide as much detail as possible about the documented changes to be made to the systems or equipment.
When a change is implemented or planned, there is always the potential for problems, or you may identify a consideration in the execution of the plan. The plan for change section should detail those considerations. It's common for a primary plan to be drafted as well as an alternate plan in the event the primary plan cannot be executed. For example, if the primary plan is to move a server from one rack to another so that it can be connected to a particular switch, the alternate plan could be to leave it in the rack and use longer cables. Be sure to have multiple plans; once the change is approved, the plan(s) outlined in this document must be executed closely.
You should also document why the primary plan will succeed. The changes should be tested in a lab environment closest to the production environment (if possible) and documented in this section as well. When creating the plan, you should outline specific, objective goals, along with the metrics with which they can be measured. For example, if you are planning to make a change because there is a high error rate on an interface, then the metric measure to be compared would be the error rate on the interface. You would document what you expect the error rate to be after the change is made so that you can measure the success of the change.
The change board, also known as the change advisory board, is the body of users who will ultimately evaluate and then approve or deny the change you propose. This group of people often meets weekly to discuss the changes detailed in the change management plan documents. The goal of the change advisory board is to evaluate the proposed changes in order to reduce the impact on day‐to‐day operations of the organization.
It is common practice for the meetings of the change advisory board to be held via a conference call at a set time every week. This allows key stakeholders in an organization to be available regardless of where they are in the world. Because it's at a set time every week, there are no excuses for not being available during the change control meetings.
It is likely that if you are the technician proposing the change, you will be on the call for questions or clarification. The key to getting a change approved is to know your audience and communicate clearly in the change control plan document. Remember, the change advisory board is often composed of various stakeholders from the entire organization, not just IT. You should not assume that the change you are proposing is as clear to them as it is to you. Some change advisory boards are made up strictly of IT stakeholders, so you must understand who will review the proposed changes and choose your wording appropriately.
The change management document must be approved by the majority of change advisory board members or by specific board members. The approval of the proposed change should be documented in the change control policy for the organization. Only approved changes can be executed. If other changes need to be made outside of the original submission, additional approvals must be acquired.
Although the CompTIA A+ exam does not focus on application development testing and approval, user acceptance is an objective on the exam as it pertains to the change management process. It should be noted that user acceptance is not solely used for application development; it is also used when there is a significant update to an interface or a process, such as a service pack or upgrade to an operating system.
When a change is to be made in which the user's interaction will be impacted, it is common practice to beta‐test the change. This is also known as user testing or just plain application testing. You can achieve user acceptance two different ways:
Regardless of which method of testing you choose, a strict time frame must be communicated to the user testing the change.
Once user acceptance is obtained, it should be documented in the user acceptance section of the change management documentation. The methods of testing, the users and groups involved in testing, and the time invested in testing should be included in this section as well. Remember that the goal is the approval and successful implementation of the changes, so it is important that you are convincing and, more importantly, convinced that the change will succeed without repercussions.
As a technician you're responsible for preventing disasters that could impact the organization. You're also responsible for recovering from uncontrolled disasters. Luckily, you can prevent disasters by taking the proper precautions, as we will discuss in the following sections.
When you take steps to prevent disaster, you'll find that you're prepared when disaster strikes and can restore business continuity that much more quickly. This section discusses the following types of disasters:
Both of these types of disasters have the potential for data loss and work stoppage.
When we think of data backups, we usually relate them to disasters. However, data backups are not just used to restore from disaster; we often use data backups when a user inadvertently deletes files they shouldn't have deleted. Data backups are also used when users overwrite files or just plain forget where they put them in the first place. Regardless of how the data was lost, the underlying reason we create data backups is to recover from data loss.
Because you can't choose the disaster or situation that causes the loss of data, you should adopt a layered strategy, starting with the user and expanding outward to the infrastructure. The following sections cover several different types of strategies that can protect you from data loss.
Most of the time, your users will need to restore a single file or perhaps a few files, but definitely not the entire server or server farm. Therefore, you should make sure that one of the layers of protection allows for the restoration of individual files. You can implement this type of strategy several different ways. Depending on your resources, you should use them all.
Volume Shadow Copy, also known as the Volume Snapshot Service (VSS), has been an integral part of the Windows Server operating system since the release of Windows 2000. Volume Shadow Copy can be enabled on a volume‐by‐volume basis. Once it's turned on, all the shares on the volume are protected. You can access Volume Shadow Copy by right‐clicking a volume and selecting Properties. You can then configure it by using the Shadow Copies tab, as shown in Figure 22.12.
Volume Shadow Copy has one amazing advantage: it empowers the user to restore their own files. All the user needs to do is right‐click the file or empty space in the shared folder, select Properties, and then in the Properties window, select the Previous Versions tab. This will open a list of snapshots, as shown in Figure 22.13. The user can then double‐click the snapshots to open them as if they were currently on the filesystem. This allows the user to evaluate what they are looking for. Once they find what they are looking for, they can either click the Restore button or drag the files over to the current folder.
One limitation of Volume Shadow Copy is the number of snapshots that can be active. Only 64 snapshots can be active at one time. The oldest snapshot is deleted when a new snapshot is created to maintain a running total of 64 snapshots. By default, Volume Shadow Copy is not enabled. When it is enabled, the default schedule creates a snapshot twice a day, at 7 a.m. and 12 p.m. It's advisable to set a schedule that creates a snapshot every hour during normal business hours. This will give the user the last 64 hours of work, which could be well over a week and a half if you were open 9–5.
FIGURE 22.12 The Shadow Copies tab
FIGURE 22.13 The Previous Versions tab
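The retention math behind the hourly snapshot recommendation above is simple, as the sketch below shows. The 64-snapshot ceiling comes from the text; the eight snapshots per day is an assumption based on an 8-hour, 9-to-5 business day.

MAX_SNAPSHOTS = 64          # Volume Shadow Copy keeps at most 64 active snapshots
snapshots_per_day = 8       # assumed: one snapshot per hour during an 8-hour business day

business_days_covered = MAX_SNAPSHOTS / snapshots_per_day
print(business_days_covered)   # 8.0 business days of history, roughly a week and a half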
File‐based backups are a common type of backup in organizations today and have been since the introduction of backup software. The Windows Server operating system includes a backup program capable of protecting the local server, as shown in Figure 22.14. It is somewhat limited, because it only supports a file‐based destination and does not offer options for data tapes. It also only allows for the management of the local server. However, the product is free and is included with the Server operating system, so there is no reason not to have some type of backup.
FIGURE 22.14 Windows Server Backup
Advanced backup software, such as Veeam Backup & Replication and Veritas Backup Exec, allows for the centralized management of all backups. Multiple backup jobs can be created for various groups of servers and can be directed to various destinations. For example, the accounting servers might back up to a tape library unit, whereas the sales servers back up to a disk storage unit. We'll discuss media type later in this chapter, but the key takeaway is that multiple jobs can be created and executed at the same time.
Advanced backup software often requires a licensed agent to be installed on each server. Depending on the type of agent purchased, the agent might just allow for a simple backup of files, or it might allow for open files to be backed up while they are in use. Some agents even allow for the snapshot of all files so that a point‐in‐time image can be made of the filesystem. The backup is then created from the snapshot. This type of backup is common in financial institutions, where an end‐of‐day cutoff needs to be created.
Advanced backup software normally performs a pull of files from the selected source server and directs the information to the selected media. This is called the pull backup method, and it is probably the most common type of backup you will encounter. However, there are also push backup methods, in which the backup software directs the selected source server to push the files to the destination media using the backup server. This reduces the utilization on the backup server and speeds up the backup process, also known as the backup window.
Image‐based backups allow for a complete server to be backed up. This type of backup is also called a bare‐metal backup. It's called a bare‐metal backup because if the server hardware were to fail, you would restore the backup to a new server (bare‐metal) and restore it completely. The inherent problem with these types of restorations is that they require administrator intervention. However, the technology is impressive and spares you from reinstalling the server from scratch.
Virtualized environments are where image‐based backups really add value. Virtualization is changing the landscape of IT, and the area of backups is no different. When a server is virtualized, the guest virtual machine consists of configuration files, a virtual filesystem file, and other supporting files. When access is given to the underlying filesystem where the files can be directly accessed, they can be backed up. This allows for an image to be created for the current state of an operating system—files and all.
Most enterprise backup software supports image‐based backups for an additional license fee. It normally requires an agent to be installed on the host operating system, such as Microsoft Hyper‐V. In VMware environments, a VMware Consolidated Backup (VCB) proxy is required. This application proxy allows the backup software to create snapshots for the guest virtual machines and assists in backing up the virtual machine files.
So far, we've discussed how to use file server backups to protect an organization. However, an organization does not rely solely on file servers; there are many other types of servers in an organization. Examples include Microsoft SQL, for databases, and Microsoft Exchange, for email. In addition, there are several other types of applications that might be custom to an organization.
Just like file servers, Microsoft SQL and Microsoft Exchange have custom agents that are licensed. These agents allow for the data contained in the proprietary data stores to be backed up to your backup media. In addition to the backup of data, the agent starts a maintenance process at the end of a backup. This maintenance process checks the consistency of the current data store by replaying transaction logs, also called tlogs.
Critical applications for an organization do not have to be on site. As organizations adopt a cloud‐based approach to IT, they push critical applications out of the network and into the cloud. Providers such as Amazon Web Services (AWS) and Microsoft Azure can provide not only the critical applications but also backup services that are contained in the cloud.
When discussing the restoration of data, two characteristics dictate when you back up and how you back up. The concept of the recovery point objective (RPO) defines the point in time that you can restore to in the event of a disaster. The RPO is often the night before, since backup windows are often scheduled at night. The concept of the recovery time objective (RTO) defines how fast you can restore the data.
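As a hedged illustration of the two terms, the sketch below computes the worst-case data loss exposure (tied to the RPO) for a nightly backup, along with a hypothetical restore estimate (the RTO). The times and the four-hour restore figure are made up for illustration.

from datetime import datetime, timedelta

last_backup = datetime(2022, 4, 11, 23, 0)     # hypothetical nightly backup at 11 p.m.
failure_time = datetime(2022, 4, 12, 16, 30)   # hypothetical failure the next afternoon

rpo_exposure = failure_time - last_backup      # data created since the last backup is lost
rto_estimate = timedelta(hours=4)              # assumed time to restore from the backup media

print(f"Worst-case data loss (RPO exposure): {rpo_exposure}")
print(f"Estimated time to recover (RTO):     {rto_estimate}")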
When creating a backup job, you choose what you want to back up (source) and a destination where it is to be stored. Depending on the backup software, you may have several different destinations to select from. Examples include iSCSI storage area networks (SANs), network‐attached storage (NAS), tape library units (TLUs), or even cloud‐based storage, such as Amazon S3. These are just some examples; there are many different media options for storing backups. Each backup media option uses a specific media type, and each media type has unique advantages and disadvantages. Here are the three media types commonly used for backups:
Disk-to-Disk Disk-to-disk backups have become a standard in data centers as well, because of the proximity of the data and the short RTO. This type of media is usually located on site and then used to create an off-site copy. It can record the data faster than traditional tape, thus shortening overall backup time. It also does not require tensioning and seeking for the data, as a tape requires.
The capacity of a disk, however, is much smaller than a tape because the drives remain in the backup unit. Data deduplication can provide a nominal 10:1 compression ratio, depending on the data. This means that 10 TB of data can be compressed onto 1 TB of disk storage. So, a 10 TB storage unit could potentially back up 100 TB of data. Again, this depends on the types of files you are backing up. The more similar the data, the better the compression ratio.
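The capacity arithmetic above is worth making explicit; the short sketch below assumes the nominal 10:1 ratio quoted in the text, which in practice varies with how similar the data is.

raw_capacity_tb = 10        # physical disk capacity of the backup unit
dedup_ratio = 10            # nominal 10:1 ratio; the actual ratio depends on the data

effective_capacity_tb = raw_capacity_tb * dedup_ratio
print(f"{raw_capacity_tb} TB of disk can hold roughly {effective_capacity_tb} TB of backup data")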
Administrators will adopt a rotation schedule for long‐term archiving of data. The most popular backup rotation is grandfather, father, son (GFS). The GFS rotation defines how tapes are rotated on a first‐in, first‐out (FIFO) basis. One of the daily backups will become the weekly backup on a FIFO basis. And lastly, one of the weekly backups will become the month‐end backup. Policies should be created such as retaining 6 daily backups, 4 weekly backups, and 12 monthly backups. As you progress further away from the first six days, the RPO jumps to a weekly basis, then to a monthly basis. The benefit is that you can retain data over a longer period of time with the same number of tapes.
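A minimal sketch of the GFS promotion logic under the retention policy mentioned above (6 daily, 4 weekly, 12 monthly) follows. The exact point at which a backup is promoted varies between products, so treat this as one possible interpretation rather than a definitive implementation.

from collections import deque

# Retention policy from the text: 6 daily, 4 weekly, 12 monthly backups
daily   = deque(maxlen=6)    # "son"         - oldest entry falls off first (FIFO)
weekly  = deque(maxlen=4)    # "father"
monthly = deque(maxlen=12)   # "grandfather"

def record_backup(label, end_of_week=False, end_of_month=False):
    """Record a daily backup and promote it when a week or month ends."""
    daily.append(label)
    if end_of_week:
        weekly.append(label)     # the week's final backup is retained as the weekly copy
    if end_of_month:
        monthly.append(label)    # the month's final backup is retained as the month-end copy

for day in range(1, 32):
    record_backup(f"2022-01-{day:02}", end_of_week=(day % 7 == 0), end_of_month=(day == 31))

print("daily:  ", list(daily))
print("weekly: ", list(weekly))
print("monthly:", list(monthly))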
Backups are created for one of two main reasons: accidental deletion and disaster. Therefore, it makes sense that a disaster that could destroy your data center could also destroy the backup media. For this reason, media should be rotated off site from the on site presence of the original media.
The 3-2-1 backup rule is a common method for maintaining both on-site and off-site backups. The 3-2-1 method works like this: three instances of the data should exist at all times. The original copy of the files and a backup of the files should be on site, and the third copy of the data should be off site in the event of tragedy at the site. Here's an example: You create a business proposal on your computer (first instance), and nightly your files are backed up (second instance). You now have two instances local to your immediate site (on site) in the event of an accidental deletion. A second backup job then backs the file up to the cloud. This provides a third instance of the file, which is off site.
There are a number of ways you can achieve this method of disaster recovery. For instance, you create the file, Volume Shadow Copy snapshots the drive on the hour, and a nightly backup copies the file to the cloud for off‐site storage.
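A hedged sketch of a simple 3-2-1 compliance check is shown below. Representing each instance of the data as a (description, is_offsite) pair is our own simplification, and it follows the formulation of the rule given above; other formulations also count distinct media types.

def satisfies_3_2_1(copies):
    """copies: list of (description, is_offsite) tuples, one per instance of the data."""
    total   = len(copies)
    onsite  = sum(1 for _, offsite in copies if not offsite)
    offsite = sum(1 for _, offsite in copies if offsite)
    return total >= 3 and onsite >= 2 and offsite >= 1

# The example from the text: the original file, a nightly on-site backup, and a cloud copy
copies = [
    ("original on workstation", False),
    ("nightly backup to local disk", False),
    ("second backup job to cloud storage", True),
]
print(satisfies_3_2_1(copies))   # True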
There are several options for creating file‐based backup jobs. Each backup method has advantages and disadvantages, depending on the media you are using and the amount of time in your backup window. The following are several of the backup methods you will find primarily with file‐based backups:
Over the years, we've seen fellow administrators rely on their backups—up to the point when they try to restore them. It's a very different story when they fail during a critical moment. Fortunately, this only happens to you once, and then you adopt testing strategies. You should not consider data on a backup to be safe until you have proven that it has been restored successfully. There are so many things that can go wrong with a restore, the most common being media failure.
We recommend that you perform a restore of your backup at least once a month. This will allow you to verify that you actually have data that is restorable in the event of an emergency. Many backup products allow you to schedule a test restore. The test restore actually restores the data and compares it to what is on the backup media. When it's done testing the restore, it deletes the restored data and notifies you of any discrepancies.
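If your backup product does not automate a compare, a hedged sketch of a manual spot check follows: restore a sample of data to a scratch location and compare file hashes against the originals. The paths and the choice of SHA-256 are illustrative assumptions, not a requirement of any particular product.

import hashlib
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def compare_trees(source_dir, restored_dir):
    """Report any file whose restored copy is missing or differs from the source."""
    for source_file in Path(source_dir).rglob("*"):
        if source_file.is_file():
            restored_file = Path(restored_dir) / source_file.relative_to(source_dir)
            if not restored_file.exists():
                print(f"MISSING:  {restored_file}")
            elif sha256_of(source_file) != sha256_of(restored_file):
                print(f"MISMATCH: {restored_file}")

# Hypothetical paths for a monthly spot check
compare_trees(r"D:\Shares\Finance", r"E:\RestoreTest\Finance")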
An uninterruptible power supply (UPS) is a battery backup system that allows for power conditioning during power sags, power surges, and power outages. A UPS should be used only until a power generator can start supplying a steady source of power. For workstations and server installations where backup generators are not available, the UPS allows enough time for systems to shut down gracefully.
UPSs are often incorrectly used as a source of power generation during a power outage. The problem with this scenario is that there is a finite amount of power in the battery system. It may allow you some time to stay running, but if the power is out for too long, the UPS will shut down when its batteries are depleted.
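A rough, hedged way to reason about how long that finite battery will last is shown below. The 0.9 inverter efficiency and the wattages are assumptions for illustration; real runtime is nonlinear with load, so a production deployment should rely on the vendor's runtime charts.

def estimated_runtime_minutes(battery_watt_hours, load_watts, inverter_efficiency=0.9):
    """Very rough runtime estimate: usable battery energy divided by the attached load."""
    usable_watt_hours = battery_watt_hours * inverter_efficiency
    return usable_watt_hours / load_watts * 60

# Hypothetical 864 Wh UPS battery carrying a 400 W server load
print(round(estimated_runtime_minutes(864, 400)))   # roughly 117 minutes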
UPS systems should be used to supply power while a power generator is starting up. This protects the equipment during the power sag that a generator creates during its startup after a power outage has triggered it.
There are several types of UPS systems. The main types are as follows:
Although power generators are not an objective on the 220-1102 exam, for completeness, we want to discuss them in contrast to UPSs. Power generators supply a constant source of power during a power outage. Power generators consist of three major components: fuel, an engine, and a generator. The engine burns the fuel to turn the generator and create power. The three common sources of fuel are natural gas, gasoline, and diesel. Diesel fuel generators are the most common type of generator supplying data centers around the world. However, natural gas generators are common for small businesses and home installations.
As mentioned in the previous section, generators require a startup period before they can supply a constant source of electricity. In addition to the startup period, there is also a switchover lag. When a power outage occurs, the transfer switch moves the load from the street power to the generator circuit. UPSs help to bridge both the lag and sag in electricity supply during the switchover and startup periods.
The power specification in North America is around 120 volts 60 Hz alternating current (AC). Normally, your voltage will be plus or minus 10 volts from 120 volts. Most equipment is rated for this variance in electricity. A power surge, however, can be upward of 500 volts for a split second, which is where damage to your equipment occurs.
A power surge can happen for a number of reasons. Two common reasons are lightning strikes and power company grid switches. A lightning strike is probably the most common reason for power surges during a storm. When the lightning hits near an electrical line, it will induce a higher voltage, which causes the surge. After a storm is over, you are still not safe from power surges. When the electrical company transfers a load back on with the power grid switches, a brief surge can sometimes be seen.
Luckily, you can protect yourself from power surges with surge protection. Surge protection can be implemented two different ways: point‐of‐use and service entrance surge protection. Surge protectors, UPSs, and power conditioners are all point‐of‐use devices, with surge protectors being the most common and obvious point‐of‐use device used for protection. Surge protectors look like common power strips but have protection circuits built in that can suppress up to 600 joules of energy. Many of them have coaxial protection for cable modems and telephone jacks, as shown in Figure 22.15. Some surge protectors even have RJ‐45 network jacks, to protect your network equipment.
FIGURE 22.15 A common surge protector
Service entrance surge protection, also called a transient voltage surge suppressor (TVSS), is normally installed by your electric company. It is commonly installed between the electrical meter and the circuit breaker box to protect you from any surges from the power grid. Most of these devices can handle over 1,000 joules of surge. These devices often come with a type of insurance from the electric company. In the event you suffer a power surge and your electronics are damaged in the process, you can submit a claim for reimbursement of the damaged equipment. Every electric company is different, so you should check before you contract these services. Figure 22.16 shows an example of a large, industrial service entrance surge protection unit.
FIGURE 22.16 An industrial service entrance surge protection unit
Disaster can strike in several different ways and is not limited to data loss or power problems. A critical admin or user account can be inadvertently deleted or you may simply forget the password. Fortunately, there are several different options, depending on the type of account involved.
Starting with Windows 8, Microsoft has pushed the use of a Microsoft account as your primary login. When you set up Windows for the first time, the default is to use a Microsoft online account. A Microsoft account allows you to download content and applications from the Microsoft Store. It also allows you to recover your account by using Microsoft services. When you sign up for a Microsoft account, you're asked for backup email accounts and even your cell phone number for text messages. All these alternate methods of contact make it easier to recover your account if you lose your password.
If you are using a local account to log into the operating system, your options will be slightly limited. Fortunately, starting with Windows 10 version 1803, there is a built‐in option to recover a password for a local account. During the setup of the administrator account, the operating system will ask you three security questions. If you forget the password, you simply need to answer the security questions you provided during setup to reset the password, as shown in Figure 22.17.
FIGURE 22.17 Windows 10 security questions
If the local account is deleted or the password is forgotten and you are not running Windows 10 version 1803 or later, then your only option is to perform a System Restore to bring back the affected account. Unfortunately, if the user account is completely deleted, a System Restore will not bring back the user's files. It will, however, restore the local user account, after which a traditional restore of the files from backup can be performed.
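If you prefer the command line, Windows PowerShell 5.1 includes cmdlets for working with restore points. This is a minimal sketch that assumes System Protection is enabled; the sequence number 12 is hypothetical, and the restore triggers a reboot.
# List the available restore points (run from an elevated Windows PowerShell prompt).
Get-ComputerRestorePoint
# Roll the system back to a chosen restore point by its sequence number.
Restore-Computer -RestorePoint 12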
You have several options with domain accounts that you don't have with local accounts. The first and most obvious is that with domain accounts there are other privileged accounts available. These privileged accounts can reset passwords that have been forgotten and unlock accounts that have been locked out after too many unsuccessful attempts.
If an account is deleted, you have several options as well, but they require that you've taken preventive measures before the account is deleted. The first option for account recovery with domains is the use of the Active Directory Recycle Bin. The Recycle Bin feature first appeared in Windows Server 2008 R2, so you must be running this version of Windows Server or later. A second requirement is having the Recycle Bin enabled, since it is not enabled by default. Once the Recycle Bin is enabled, if an Active Directory user account is deleted, it will show up in the Deleted Objects container. All you need to do is right‐click the object and choose Restore.
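For administrators who prefer PowerShell over the GUI, the ActiveDirectory module can both enable the Recycle Bin and restore a deleted object. This is a sketch only; the forest name example.com and the user name are hypothetical, and note that enabling the Recycle Bin cannot be reversed.
# Enable the Active Directory Recycle Bin for a hypothetical forest (irreversible).
Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'example.com'
# Later, find and restore a deleted user object by name.
Get-ADObject -Filter "Name -like 'John Smith*'" -IncludeDeletedObjects | Restore-ADObject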
Another way to restore Active Directory objects is from backup. Almost all Windows backup utilities have a provision for the backup of Active Directory. Even the Windows Backup utility allows for the backup of Active Directory by selecting the backup of the System State data on the domain controller. In the event that an object is deleted, most backup products allow you to restore the individual user account with a few clicks.
If you are using the Windows Backup utility, you must perform an authoritative restore, which is a little more complicated than a few clicks. The following is an overview of the steps to perform an authoritative restore with a backup program that supports only the restore of System State, such as the Windows Backup utility: first, stop Active Directory Domain Services with the net stop ntds command; next, restore the System State data from your backup; finally, use the ntdsutil utility to update the object you need to restore.
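The exact sequence varies by backup product, but after you stop the directory service and restore the System State data with your backup tool, the ntdsutil portion might look something like the following sketch on Windows Server 2008 R2 or later. The distinguished name is hypothetical; substitute the object you need to restore.
ntdsutil
activate instance ntds
authoritative restore
restore object "CN=John Smith,OU=Sales,DC=example,DC=com"
quit
quit
net start ntds
The restore object command raises the object's version numbers so that the restored copy wins when replication resumes, rather than being overwritten by the deletion that already replicated to the other domain controllers.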
As a professional technician, you need to possess a certain level of technical competence, or you'll quickly find yourself looking for work. Technical ability alone isn't enough, though; there are many people out there with skills similar to yours. One thing that can set you apart is acting like a true professional and building a solid reputation. As the noted investor Warren Buffett said, "It takes 20 years to build a reputation and 5 minutes to ruin it. If you think about that, you'll do things differently."
You could probably break down professionalism 100 different ways. For the A+ 220‐1102 exam, and for the purposes of this chapter, we're going to break it down into two critical parts: communication and behavior.
Good communication includes listening to what the user or manager or developer is telling you and making certain that you understand completely. Approximately half of all communication should be listening. That a user or customer may not fully understand the terminology or concepts doesn't mean they don't have a real problem that needs to be addressed. Therefore, you must be skilled not only at listening but also at translating.
Professional behavior encompasses politeness, guidance, punctuality, and accountability. Always treat the customer with the same respect and empathy that you would expect if the situation were reversed. Likewise, guide the customer through the problem and the explanation. Tell them what has caused the problem they are currently experiencing and the best solution for preventing it from recurring in the future.
Demonstrating professionalism begins with your appearance and attire. You should always dress for the respect and professionalism that you deserve. It's an easy element of being professional, but it is often an overlooked element.
If you dress down, then you will be judged as being less professional than your intellect or position deserves. The customer might not communicate with you the way you'd expect, because the customer will make assumptions about your intellect or position based on your appearance. The opposite can also happen if you overdress. The customer might not give you the same respect that a technician of your caliber might expect. You could be viewed as overqualified for the customer’s needs or non‐technical.
These judgments shouldn't happen from the customer's perspective, but they often do because of the impression that appearance makes. Therefore, you should always attempt to match the required attire of the environment you are working in. For example, a technician on a construction site should have the expected attire, such as safety equipment and rugged clothes. Showing up in a suit is not what the customer would expect. Conversely, if you showed up at an office environment wearing jeans and a t‐shirt you'd be looked at as being unprofessional.
Luckily for IT professionals, there are two norms of appearance and attire that are generally expected, depending on the environment and workplace: business formal and business casual.
The dress attire of the organization will differ slightly from these norms based on season, organization type, and even the day of the week. It is popular now for organizations to have a casual Friday that differs in definition based on whether business formal or business casual is the norm the rest of the week. Therefore, it is always best to check with your supervisor or coworkers about what is appropriate and what is not.
The act of diagnosis starts with the art of customer relations. Go to the customer with an attitude of trust. Believe what the customer is saying. At the same time, retain an attitude of hidden skepticism; don't believe that the customer has told you everything. This attitude of hidden skepticism is not the same as distrust. Just remember that what you hear isn't always the whole story; customers may inadvertently forget to provide some crucial details.
For example, a customer may complain that their CD‐ROM drive doesn't work. What they fail to mention is that it has never worked and that they installed it. On examining the machine, you realize that they mounted it with screws that are too long, preventing the tray from ejecting properly.
Here are a few suggestions for making your communication with the customer easier:
Use the collected information. Once the problem or problems have been clearly identified, your next step is to isolate possible causes. If the problem cannot be clearly identified, then further tests will be necessary. A common technique for hardware and software problems alike is to strip the system down to bare‐bones basics. In a hardware situation, this could mean removing all interface cards except those absolutely required for the system to operate. In a software situation, this may mean disabling elements within Device Manager.
From there, you can gradually rebuild the system toward the point where the trouble started. When you reintroduce a component and the problem reappears, you know that component is the one causing the problem.
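If you want to script the strip-it-down approach rather than clicking through Device Manager, recent versions of Windows expose the same functionality through PowerShell. This is a sketch only; the device class is just an example, the instance ID shown is made up, and disabling devices can interrupt connectivity, so use it with care.
# List working devices of a sample class, then disable one by instance ID (ID below is a made-up example).
Get-PnpDevice -Class 'Net' -Status OK | Select-Object FriendlyName, InstanceId
# Disable a single device while testing (run as administrator).
Disable-PnpDevice -InstanceId 'PCI\VEN_8086&DEV_15B8\3&11583659&0&FE' -Confirm:$false
# Re-enable it once testing is complete.
Enable-PnpDevice -InstanceId 'PCI\VEN_8086&DEV_15B8\3&11583659&0&FE' -Confirm:$false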
Customer satisfaction goes a long way toward generating repeat business. If you can meet the customer's expectations, you will most likely hear from them again when another problem arises. However, if you can exceed the customer's expectations, you can almost guarantee that they will call you the next time a problem arises.
Customer satisfaction is important in all communications media—whether you are on site, providing phone support, or communicating through email or other correspondence. If you are on site, follow these rules:
When you finish a job, notify the user that you have finished. Make every attempt to find the user and inform them of the resolution. If you cannot find the customer, leave a note explaining the resolution.
You should also leave a means by which the customer can contact you if they have any questions about the resolution or a related problem. In most cases, you should leave your business number and, if applicable, your cell phone number, in case the customer needs to contact you after hours.
You should also notify both your manager and the user's manager that the job has been completed.
If you are providing phone support, keep the following guidelines in mind:
The most important skill that you can have is the ability to listen. You have to rely on the customer to describe the problem accurately. They cannot do that if you are second‐guessing or jumping to conclusions before the whole story is told. Ask broad questions to begin, and then narrow them down to help isolate the problem.
It is your job to help extract the description of the problem from the user. For example, you might ask the following questions:
Complaints should be handled in the same manner in which they would be handled if you were on site. Make your best effort to resolve the problem and not argue. Again, your primary goal is to keep the customer.
Close the incident only when the customer is satisfied that the solution is the correct one and the problem has gone away.
End the phone call in a courteous manner. Thanking the customer for the opportunity to serve them is often the best way.
Talking to the user is an important first step in the troubleshooting process. Your first contact with a computer that has a problem is usually through the customer, either directly or by way of a work order that contains the user's complaint. Often, the complaint is something straightforward, such as, “There's smoke coming from the back of my monitor.” At other times, the problem is complex, and the customer does not mention everything that has been going wrong. Regardless of the situation, always approach it calmly and professionally, and remember that you get only one chance to make a good first impression.
Critical to appropriate behavior is to treat the customer, or user, the way you would want to be treated. Much has been made of the Golden Rule—treating others the way you would have them treat you. Six key elements to this, from a business perspective, are punctuality, accountability, flexibility, confidentiality, respect, and privacy. The following sections discuss these elements in detail.
Punctuality is important and should be a part of your planning process. If you tell the customer that you will be there at 10:30 a.m., you need to make every attempt to be there at that time. If you arrive late, you have given them false hope that the problem will be solved by a set time. That can lead to anger because it can appear that you are not taking the problem seriously. Punctuality continues to be important throughout the service call and does not end with your arrival. If you need to leave to get parts and return, tell the customer when you will be back, and be there at that time. If for some reason you cannot return at the expected time, alert the customer and tell them when you can return.
Along those same lines, if a user asks how much longer the server will be down and you respond that it will be up in five minutes only to have it down for five more hours, the result can be resentment and possibly anger. When estimating downtime, always allow for more time than you think you will need, just in case other problems occur. If you greatly underestimate the time, always inform the affected parties and give them a new time estimate. To use an analogy that will put it in perspective: if you take your car to get an oil change and the counter clerk tells you it will be "about 15 minutes," the last thing you want is to be still sitting there four hours later. If you ever feel that you won't be able to meet the timeline you proposed, communicate that as quickly as possible. It's better to overcommunicate than to have users wondering where you are.
Exercise 22.1 is a simple drill that you can modify as needed. Its purpose is to illustrate the importance of punctuality.
Accountability is a trait that every technician should possess. When problems occur, you need to be accountable for them and not attempt to pass the buck to someone else. For example, suppose you are called to a site to put a larger hard drive into a server. While performing this operation, you inadvertently scrape your feet across the carpeted floor, build up energy, and zap the memory in the server. Some technicians would pretend the electrostatic discharge (ESD) never happened, put the new hard drive in, and then act completely baffled by the fact that problems unrelated to the hard drive are occurring. An accountable technician would explain to the customer exactly what happened and suggest ways of proceeding from that point—addressing and solving the problem as quickly and efficiently as possible.
Accountability also means that you do what you say you're going to do, ensure that expectations are set and met, and communicate the status with the customer. Here are some examples of ways to be accountable:
The last one is the most overlooked, yet it can be the most important. Some technicians fix a problem and then develop an “I hope that worked and I never hear from them again” attitude. Calling your customer back (or dropping by their desk) to ensure that everything is still working right is an amazing way to build credibility and rapport quickly.
Flexibility is another trait that's as important as the others for a service technician. You should respond to service calls promptly and close them (solve them) as quickly as you can, but you must also be flexible. If a customer cannot have you on site until the afternoon, you must make your best effort to work them into your schedule around the time most convenient for them. Likewise, if you are called to a site to solve a problem and the customer brings another problem to your attention while you are there, you should make every attempt to address that problem as well. Under no circumstances should you give a customer the cold shoulder or not respond to additional problems because they were not on an initial incident report.
It's also important that you remain flexible in dealing with challenging or difficult situations. When someone's computer has failed, they likely aren't going to be in a good mood, which can make them a “difficult customer” to deal with. In situations like these, keep in mind the following principles:
Focus on your communication skills. If you have a difficult customer, treat it as an opportunity to see how good a communicator you really are. (Maybe your next job will be a foreign ambassador.) Ask nonconfrontational, open‐ended questions. “When was the last time it worked?” is more helpful than “Did it work yesterday?” or “Did you break it this morning?” These can help you narrow down the scope of the problem.
Another good tactic here is to restate the issue or question to verify that you understand. Starting with “I understand that the problem is…” and then repeating what the customer said can show empathy and proves that you were listening. If you have it wrong, it's also a good opportunity to let your customer correct you so that you're on track to solve the right problem.
The goal of confidentiality is to prevent or minimize unauthorized access to files and folders and disclosure of data and information. In many instances, laws and regulations require confidentiality for specific information. For example, Social Security records, payroll and employee records, medical records, and corporate information are high‐value assets. This information could create liability issues or embarrassment if it fell into the wrong hands.
Over the last few years, there have been a number of cases in which bank account and credit card numbers were published on the Internet. The loss of confidence by consumers due to these types of breaches of confidentiality can far exceed the actual monetary losses from the misuse of this information.
As a computer professional, you are expected to uphold a high level of confidentiality. Should a user approach you with a sensitive issue—telling you their password, asking for assistance obtaining access to medical forms, and so on—it is your obligation as a part of your job to make certain that information goes no further.
As part of confidentiality, don't ever disclose work‐related experiences via social media. You might have had a terrible day and really want to say something like, “Wow, the people at XYZ company sure are insufferable morons,” but just don't do it. It's not professional, and it could expose you to legal action.
Much of the discussion in this chapter is focused on respecting the customer as an individual. However, you must also respect the tangibles that are important to the customer. While you may look at a monitor that they are using as an outdated piece of equipment that should be scrapped, the business owners may see it as a gift from their children when they first started their business.
Treat the customer's property as if it had value, and you will win their respect. Their property includes the system you are working on (laptop/desktop computer, monitor, peripherals, and the like) as well as other items associated with their business. Avoid using the customer's equipment, such as telephones or printers, unless it is associated with the problem you've been summoned to fix.
Another way to show respect is to focus on the task at hand and avoid distractions. For example, you should avoid the following:
As for texting or talking to coworkers, there may be times when it's appropriate for you to do so based on the situation. The key is to find the right time to do it and, if appropriate, tell the customer what you are doing. For example, after gathering information, you might say something like, “Do you mind if I give my coworker Jen a quick call? The other day she told me about a situation she had that sounded exactly like this, and I want to see if her fix worked well.” But then make the call quick and business‐focused.
Respecting the customer is not rocket science. All you need to do—for this exam and in the real world—is think of how you would want someone to treat you. Exercise 22.2 explores this topic further. This exercise, like Exercise 22.1, can be modified to fit your purpose or constraints. Its goal is to illustrate the positive power of the unexpected.
One last area to consider that directly relates to this topic is that of ethics. Ethics is the application of morality to situations. While there are different schools of thought, one of the most popular areas of study is known as normative ethics, which focuses on what is normal or practical (right versus wrong and so on). Regardless of religion, culture, and other influences, there are generally accepted beliefs that some things are wrong (stealing, murder, and the like) and some things are right (for example, the Golden Rule). You should always attempt to be ethical in everything you do, because it reflects not only on your character but also on your employer.
Although there is some overlap between confidentiality and privacy, privacy is an area of computing that is becoming considerably more regulated. As a computing professional, you must stay current with applicable laws because you're often one of the primary agents expected to ensure compliance.
Although the laws provide a minimal level of privacy, you should go out of your way to respect the privacy of your users beyond what the law establishes. If you discover information about a user that you should not be privy to, you should not share it with anyone, and you should alert the customer that their data is accessible and encourage them—if applicable—to remedy the situation. This includes information that you see on their computer, on their desk, on printers, or anywhere else in their facility.
Whether you are dealing with customers in person or on the phone, there are rules to which you should adhere. These were implied and discussed in the previous sections, but you must understand them and remember them for the exam.
Listen to your customers and take notes. Allow them to complete their statements and avoid interrupting them. People like to know that they are being heard, and as simple an act as it is, this can make all of the difference in making them feel at ease with your work.
Everyone has been in a situation where they have not been able to explain their problem fully without being interrupted or ignored. It is not enjoyable in a social setting, and it is intolerable in a business setting.
Set and meet—or exceed—expectations and communicate timelines and status. Customers want to know what is going on. They want to know that you understand the problem and can deal with it. Being honest and direct is almost always appreciated.
Deal appropriately with confidential materials. Don't look at files or printouts that you have no business looking at. Make sure the customer's confidential materials stay that way.
In this chapter, we covered ticketing systems and how they are implemented for an organization. We also covered the various documentation types that you will encounter as a technician throughout your career. Some of the documentation is created from external regulations. This, in turn, creates policies, which dictate processes and procedures. You'll create some of the documentation as you are troubleshooting a problem and some of the documentation after fixing a problem. We also covered documentation often used with the change management process.
Next, we looked at disaster prevention and recovery. The two main areas are data loss and equipment failure due to power issues. Data loss can be prevented with data backups and other user‐facing strategies, such as Volume Shadow Copy. Power problems can be prevented with the appropriate use of uninterruptible power supplies and surge protection equipment.
Finally, we moved on to professionalism and communication. You should treat your customers as you would want to be treated. Your actions and behavior should let them know that you respect them and their business.
The answers to the chapter review questions can be found in Appendix A.
You will encounter performance‐based questions on the A+ exams. The questions on the exam require you to perform a specific task, and you will be graded on whether or not you were able to complete the task. The following requires you to think creatively in order to measure how well you understand this chapter's topics. You may or may not see similar questions on the actual A+ exams. To see how your answers compare to the authors', refer to Appendix B.
A user has called in and explained that they accidentally overwrote a file and need to retrieve the most recent previous version of the file. Luckily, you have Volume Shadow Copy configured on the share where the file was overwritten. What are the steps to recover the file?
The ipconfig command is perhaps the most-used utility in troubleshooting and network configuration. The ipconfig /renew command sends a query to the DHCP server asking it to resend and renew all DHCP information. For a more detailed look at the ipconfig command, type ipconfig /? at the command prompt. The ifconfig command is used with Linux and macOS clients. There are no /refresh or /start switches for these commands.
You can type cmd or command in the Start menu, and the command prompt utility will pop up in the search results. Run is not a command; it is a dialog box. Open is not a command; it is an operating system action. The cmd command starts the command-prompt application.
The shutdown utility can be used to schedule a remote shutdown; for example, shutdown /t 60 /m \\computer. The taskmgr utility is used to view tasks, kill is used to kill processes, and netstat is used to view network statistics and activity.
The command eventvwr.msc will start the Event Viewer snap-in. The command eventviewer.exe is not a valid command. The command lusrmgr.msc will start the Local Users and Groups snap-in. The command devmgmt.msc will start the Device Manager snap-in.
The HKEY_LOCAL_MACHINE Registry hive contains information about the computer's hardware. It is also known as HKLM. HKEY_CURRENT_MACHINE and HKEY_MACHINE are not valid Registry hives. HKEY_RESOURCES was used with Windows 9x operating systems but is no longer used.
The robocopy command copies all data and allows NTFS permissions to remain intact. The xcopy and copy commands copy files from a source folder to a destination folder but do not copy NTFS permissions. The chkdsk command is used to check the integrity of the NTFS filesystem.
The pathping command measures the packet loss at each router as the packet travels to the destination address; it combines the ping and tracert commands. The ping command returns a single destination's response time. The nslookup command is used to resolve DNS addresses. The tracert command allows you to see how a packet travels to its destination.
B. The msinfo32.exe tool allows for the remote reporting of a computer's hardware. regedit.exe is used to edit the Registry. msconfig.exe is used to change the startup of services and change the boot process. dxdiag.exe is used to diagnose DirectX problems.
The chkdsk command is used to check a volume for corruption, as well as attempt to repair the corruption. The diskpart command allows you to create, modify, and view volumes on a disk. The format command allows you to format a filesystem on a volume. The sfc command is used to fix corrupted files but not volume corruption.
A. The Boot Configuration Data is stored in the EFI System Partition on an EFI installation of Windows. The WinRE partition is used for the Windows Recovery Environment. Secure Boot is a feature of an EFI installation and does not contain its own partition. The C:\WINDOWS folder is where the installation of Windows exists.
The netstat command can be used to view ports in use by the operating system and network applications communicating with the network. The ipconfig command allows you to see the current IP address and DNS information for the operating system. The pathping command allows you to view packet loss along the path to a destination IP address. The nslookup command is used to resolve DNS records.
The command ls -la will list all the files in a long format. The command ls -a | ls -l will not work. The command ls -s; ls -a will show two listings: one with the size and the other with all the files. The ls -a\ls -l command will show two listings (one with all the files and the other in a long format), but it will not show all the files in a long format.
The nano command is used to edit files. The ps command lists running processes. The rm command removes files or directories. The ls command lists files and folders in the filesystem.
The ip command can be used to edit an Ethernet connection's configuration settings. The dd command is used to duplicate disks. The apt-get command is used with the APT package management system for downloading packages. The pwd command shows the current working directory.
The apt utility can be used to download and apply patches to a Linux installation. The update command is not a utility. Shell/terminal is an interface for interacting with the operating system at the command line. The patch command is not a utility.
The chown command is used to change ownership of a file. The cd command changes the working directory. The chmod command changes permissions on files. The pwd command displays the current working directory.
The fsck Linux utility is used to check and repair disks. The chkdsk utility is a Windows utility used to check and repair disks. The du utility is used to show the current disk usage. dumgr is not a utility and is a wrong answer.
The kill utility can be used only at the command line of Linux/macOS. The Task Manager is a Windows utility. Close Quit is not a feature and is therefore a wrong answer.
The sudo command can be used to run a single command as another user. The su command allows you to change user logins at the command line. The passwd command changes the user's password. The ifconfig command allows you to view and modify the wired network interface.
The ps command will display a snapshot of the current running processes on a Linux operating system. The ls command will display a listing of files from the working directory. The cat command will display the contents of a file. The su command allows you to change user logins at the command line.
The command cd .. will take you one level back from the current working directory. The command cd . will do nothing, because the period signifies the current working directory. The command cd . . . is not a valid command. The command cd ~ will change directories to the home directory of the user.
The permissions are rwx for the user, rw- for the group, and r-- for everyone else. Since the user is only a member of the group applied to the file, they will have read and write permissions.
The -p option on the mkdir command allows subfolders to be created as well as the target folder. All other answers are incorrect.
Android apps use the .apk (Android Package Kit) extension. Apps are developed with a software development kit (SDK), but .sdk is not a valid extension. Apple iOS apps use an .ipa (iOS App Store Package) extension. Only the Windows desktop operating system can execute .exe files.
The msconfig utility allows you to boot with basic drivers and minimal startup of nonessential services. Enable Debugging is used by kernel developers. Disable Driver Signature Enforcement is used to allow an unsigned driver to load during boot. Enable Low-Resolution Video will boot the operating system into a VGA mode.
The /REBUILDBCD option can be used with the bootrec tool to rebuild the boot configuration data (BCD). The /FIXBOOT option writes a new boot sector to the system partition. The /SCANOS option scans all other partitions that are found to have Windows installations. The /FIXMBR option writes a new master boot record (MBR) to the partition.
winresume.exe is used to load Windows from a suspended state. The Boot Configuration Data (BCD) is used to direct Windows to boot the proper installation. ntoskrnl.exe is the Windows kernel. winload.exe is used for the normal booting of the Windows operating system.
The msconfig.exe tool can be used to enable or disable services on startup and launch tools, but it cannot be used to diagnose performance issues. The Device Manager MMC can be used to view and modify devices, but it will not help diagnose performance problems. Reliability Monitor will display the reliability of the operating system, but it will not help diagnose problems with performance.
The ntbtlog.txt file is used to diagnose problems with bootup. Windows Recovery Environment is used to solve problems with Windows and is not typically used for problems with Windows updates. Safe mode is a boot mode that loads minimal drivers and services.
regedit is used to modify the Registry. bootrec is used to repair the boot records on an operating system installation. User Account Control (UAC) is used to control access to administrative credentials.
The msconfig.exe tool is used to modify startup programs and launch other diagnostic tools.
$xvar = 2 is a PowerShell statement that will load the variable xvar with a value of 2. The statement xvar = 2 is Bash syntax. The statement xvar = 2; is JavaScript syntax. The statement set /a xvar=2 is Windows batch script syntax.
A for loop has a defined beginning and end, and steps from the beginning to the end. A do while loop is a type of while loop and has only a defined end. A while loop has only a defined end. An if statement is branch logic, not a loop.
The .bat extension is used with the Windows batch scripting language. The .vbs extension is used with the VBScript language. The .js extension is used with the JavaScript scripting language. The .py extension is used with the Python scripting language.
The .py extension is used with the Python scripting language. The .vbs extension is used with the VBScript language. The .js extension is used with the JavaScript scripting language. The .bat extension is used with the Windows batch scripting language.
The .sh extension is used with the Bash scripting language. The .vbs extension is used with the VBScript language. The .bat extension is used with the Windows batch scripting language. The .py extension is used with the Python scripting language.
You use the chmod command to grant execute permissions. The chown command changes ownership. There is no such thing as an execute attribute. Adding .sh to the end of the script doesn't serve any purpose.
The statement mvar = 8; is JavaScript syntax to load a variable of mvar with a value of 8. The statement $mvar = 8 is PowerShell syntax. The statement mvar = 8 is Bash syntax. The statement set /a mvar=8 is Windows batch script syntax.
The line //comment is used to comment JavaScript code. The line 'comment is used to comment VBScript code. The line REM comment is used to comment Windows batch script code. The line # comment is used to comment Bash script code and PowerShell code.
The .js extension is used with the JavaScript scripting language. The .sh extension is used with the Bash scripting language. The .bat extension is used with the Windows batch scripting language. The .py extension is used with the Python scripting language.
Here is how to remove a DIMM and replace it with another one:
The components are labeled in the following illustration.
Here are the steps to remove a power supply from a computer chassis:
The answers to the Chapter 3 performance‐based question are as follows:
Here are some example steps to take to clean an inkjet printer. The process for starting the cleaning cycle on inkjet printers can vary, and some printers have both quick and deep‐clean cycles. Always check your documentation for steps specific to your printer.
Possible answers for examples of physical network topologies could include bus, ring, star, mesh, and hybrid. The simplest topology, and the one that uses the least amount of cable, is a bus. It consists of a single cable that runs to every workstation, as shown in the following illustration.
Here is the correct matching of protocols and services to their ports:
Protocol (service) | Port(s) |
---|---|
FTP | 20, 21 |
SSH | 22 |
Telnet | 23 |
SMTP | 25 |
DNS | 53 |
DHCP | 67, 68 |
TFTP | 69 |
HTTP | 80 |
POP3 | 110 |
NetBIOS/NetBT | 137, 139 |
IMAP | 143 |
SNMP | 161, 162 |
LDAP | 389 |
HTTPS | 443 |
SMB/CIFS | 445 |
RDP | 3389 |
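To check whether one of these ports is actually reachable on a remote host, PowerShell's Test-NetConnection cmdlet is handy. The host name below is a placeholder.
# Test TCP connectivity to a specific port (HTTPS in this example).
Test-NetConnection -ComputerName server01.example.com -Port 443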
Here are the steps to install a PCIe network card for a Windows 10 desktop:
Put the case back on the computer and power it up.
Windows Plug and Play (PnP) should recognize the NIC and install the driver automatically. It may also ask you to provide a copy of the necessary driver if it does not recognize the type of NIC that you have installed.
If Windows does not start the installation routine immediately, you can add it manually.
Click Start ➢ Settings (it looks like a gear) ➢ Devices ➢ Bluetooth & Other Devices, and then click the plus sign next to Add Bluetooth Or Other Device.
That will bring up the Add A Device window.
When Windows finds the NIC, choose it and continue the installation.
After installing a NIC, you must hook the card to the network using the appropriate cable (if you're using wired connections).
To enable Microsoft Hyper‐V, perform the following steps:
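One way to enable Hyper-V without clicking through the Windows Features dialog is from an elevated PowerShell prompt; this assumes a 64-bit Pro, Enterprise, or Education edition with virtualization support enabled in firmware.
# Enable the Hyper-V feature set, including the management tools (a reboot is required afterward).
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All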
Here is how to replace the hard drive in the example laptop computer:
Here are the steps to connect an iPhone to a Wi‐Fi network:
The correct order for the best practice methodology is shown here. Getting the sub‐steps in the exact order isn't critical, but getting the major steps in order and the right sub‐steps under the correct major step is.
In order to accommodate the future requirement of BranchCache, your organization will need to purchase a volume license agreement with Microsoft. The BranchCache feature is only available in Windows 10 Enterprise. Windows 8.1 Pro is a retail operating system that can be upgraded to Windows 10 Enterprise. However, the upgrade will require a different 25‐digit product key and activation of the Windows 10 Enterprise operating system.
To check to see which background processes are running and the resources they are using, open Task Manager. Do so by pressing Ctrl+Alt+Delete and selecting Task Manager. You can also press Ctrl+Shift+Esc.
Once in Task Manager, click the Processes tab, as shown in Figure 14.85. If all the processes are not shown, then expand the More Details chevron in the lower left, if it is not already expanded. Click the CPU column header to sort by CPU usage. If a process is taking up a considerable amount of CPU time, you can highlight it and click End Process to shut it down. You can also sort by memory used and shut down processes that look to be using excessive amounts of memory. Note that shutting down critical processes may cause Windows to lock up or otherwise not work properly, so be careful what you choose to terminate. Note also that right‐clicking one of the processes offers the End Process Tree option—a useful option when the process being killed is associated with others.
FIGURE 14.85 Windows Task Manager
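If you would rather check from a prompt than from Task Manager, similar information is available through PowerShell. This is a sketch only; the process ID shown is hypothetical, and Stop-Process deserves the same caution described above for ending processes.
# Show the ten processes using the most CPU time, then the most memory.
Get-Process | Sort-Object CPU -Descending | Select-Object -First 10 -Property Name, Id, CPU
Get-Process | Sort-Object WorkingSet -Descending | Select-Object -First 10 -Property Name, Id, WorkingSet
# Stop a runaway process by its process ID (hypothetical ID shown).
Stop-Process -Id 4242 -Force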
Assuming you are running Windows 11, the process is as follows:
FIGURE 15.55 The Set Up A Work Or School Account dialog box
FIGURE 15.56 Domain dialog box
FIGURE 15.57 Domain administrative credentials dialog box
FIGURE 15.58 Local administrative rights dialog box
The listing you see when typing these commands will differ based on such factors as the system, the directory, your permissions, and the files/subdirectories present, but in all cases, there will be entries present with the -a option that do not appear in the display without it. Among those listings that now appear are a single period (representing the present directory) and a double period (representing the parent directory), as shown in Figure 16.25.
If there are any files or directories starting with a period, they will now appear where they did not before. The easiest way to "hide" a file or directory in Linux is to start the name of it with a period; thus, it will not show up in a listing unless the -a option is used. An example of this is shown in Figure 16.26.
FIGURE 16.25 An example of hidden files in various directories
FIGURE 16.26 An example of hiding files in Linux
A simple 8-character alphanumeric password can use the digits 0–9 (10 characters) plus 26 uppercase and 26 lowercase letters. This gives you a total of 52 letters and 10 numbers, or 62 possibilities per character: 62 to the power of 8, or 62 × 62 × 62 × 62 × 62 × 62 × 62 × 62 = 218,340,105,584,896 combinations. A 25-character alphanumeric password with symbols has 95 possibilities per character; 95 to the power of 25 is approximately 2.77 × 10^49 combinations. If you are using a calculator, you might see 2.7738957e+49 as a result. Although the exact math is not significant, the underlying lesson is an understanding of combinations and complexity.
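If you want to verify these numbers yourself, arbitrary-precision math is available in PowerShell through the .NET BigInteger type; the following is a quick sketch.
# 62 possible characters (A-Z, a-z, 0-9) in each of 8 positions.
[bigint]::Pow(62, 8)      # 218340105584896
# 95 printable characters in each of 25 positions.
[bigint]::Pow(95, 25)     # roughly 2.77 x 10^49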
The following explains how you would achieve the goal:
To write a PowerShell script to find other scripts in a user profile directory and all its subdirectories, you need the $home environment variable. There are a number of ways of writing this script to achieve the solution. The following is just one of the possibilities, using the parameters found on the Microsoft website:
Get-ChildItem -Path $home\* -Include *.ps1 -Recurse
Here are some steps to take to look for trip hazards and eliminate them:
FIGURE 22.18 The previous version of a file
FIGURE 22.19 Confirming a restore from a previous version
Register to gain one year of FREE access after activation to the online interactive test bank to help you study for your CompTIA A+ certification exams—included with your purchase of this book! All of the chapter review questions and the practice tests in this book are included in the online test bank so you can practice in a timed and graded setting.
To register your book and get access to the online test bank, follow these steps:
Go to www.wiley.com/go/sybextestprep.
Go to www.wiley.com/go/eula to access Wiley’s ebook EULA.