
Monday, August 24, 2015

IT BOOK & Solution For CMA Student

CMA IT Book and Solution Download



                                          CMA April 2013
1. What is software? What are the classifications of software?

Ans: Computer software, or simply software, is any set of machine-readable instructions that directs a computer's processor to perform specific operations. Computer software contrasts with computer hardware, which is the physical component of computers. Computer hardware and software require each other, and neither can be realistically used without the other.

Computer software includes computer programs, libraries and their associated documentation. The word software is also sometimes used in a narrower sense, meaning application software only. Software is stored in computer memory and cannot be touched, i.e. it is intangible.

Types of Software:

Purpose, or domain of use

Based on the goal, computer software can be divided into:
  • Application software, which uses the computer system to perform useful work or provide entertainment functions beyond the basic operation of the computer itself. There are many different types of application software, because the range of tasks that can be performed with a modern computer is so large.
  • System software, which is designed to directly operate the computer hardware, to provide basic functionality needed by users and other software, and to provide a platform for running application software.[3] System software includes:
    • Operating systems, which are essential collections of software that manage resources and provide common services for other software that runs "on top" of them. Supervisory programs, boot loaders, shells and window systems are core parts of operating systems. In practice, an operating system comes bundled with additional software (including application software) so that a user can potentially do some work with a computer that only has an operating system.
    • Device drivers, which operate or control a particular type of device that is attached to a computer. Each device needs at least one corresponding device driver; because a computer typically has at least one input device and at least one output device, it typically needs more than one device driver.
    • Utilities, which are computer programs designed to assist users in maintenance and care of their computers.
  • Malicious software or malware, which are computer programs developed to harm and disrupt computers. As such, malware is undesirable. Malware is closely associated with computer-related crimes, though some malicious programs may have been designed as practical jokes.

 Nature, or domain of execution
  • Desktop applications such as web browsers and Microsoft Office, as well as smartphone and tablet applications (called "apps"). (There is a push in some parts of the software industry to merge desktop applications with mobile apps, to some extent. Windows 8, and later Ubuntu Touch, tried to allow the same style of application user interface to be used on desktops and laptops, mobile devices, and hybrid tablets.)
  • JavaScript scripts are pieces of software traditionally embedded in web pages that are run directly inside the web browser when a web page is loaded, without the need for a web browser plugin. Software written in other programming languages can also be run within the web browser if the software is either translated into JavaScript, or if a web browser plugin that supports that language is installed; the most common example of the latter is ActionScript scripts, which are supported by the Adobe Flash plugin.
  • Server software;
  • Plugins and extensions are software that extends or modifies the functionality of another piece of software, and require that software be used in order to function;

  • Embedded software resides as firmware within embedded systems, devices dedicated to a single use or a few uses such as cars and televisions (although some embedded devices such as wireless chipsets can themselves be part of an ordinary, non-embedded computer system such as a PC or smartphone).[4] In the embedded system context there is sometimes no clear distinction between the system software and the application software. However, some embedded systems run embedded operating systems, and these systems do retain the distinction between system software and application software (although typically there will only be one, fixed application, which is always run).

  • Microcode is a special, relatively obscure type of embedded software which tells the processor itself how to execute machine code, so it is actually a lower level than machine code.[5] It is typically proprietary to the processor manufacturer, and any necessary correctional microcode software updates are supplied by them to users (which is much cheaper than shipping replacement processor hardware). Thus an ordinary programmer would not expect to ever have to deal with it.








    2. What is an executive support system? Write down its components. What are its benefits and limitations?

    Ans :
    Executive support system :

    An executive information system (EIS) is a type of management information system that facilitates and supports senior executive information and decision-making needs. It provides easy access to internal and external information relevant to organizational goals. It is commonly considered a specialized form of decision support system (DSS).[1]
An EIS emphasizes graphical displays and easy-to-use user interfaces, and offers strong reporting and drill-down capabilities. In general, EIS are enterprise-wide DSS that help top-level executives analyze, compare, and highlight trends in important variables so that they can monitor performance and identify opportunities and problems. EIS and data warehousing technologies are converging in the marketplace.
In recent years, the term EIS has lost popularity in favor of business intelligence (with the sub areas of reporting, analytics, and digital dashboards).

Components of EIS:
EIS components can typically be classified as:
  • Hardware
  • Software
  • User interface
  • Telecommunications
Hardware
When talking about computer hardware for an EIS environment, we should focus on the hardware that meets the executive's needs. The executive must be put first and the executive's needs must be defined before the hardware can be selected. The basic hardware needed for a typical EIS includes four components:
  1. Input data-entry devices. These devices allow the executive to enter, verify, and update data immediately
  2. The central processing unit (CPU), which is important because it controls the other computer system components
  3. Data storage files. The executive can use this part to save useful business information, and this part also helps the executive to search historical business information easily
  4. Output devices, which provide a visual or permanent record for the executive to save or read. This device refers to the visual output device such as monitor or printer
In addition, with the advent of local area networks (LAN), several EIS products for networked workstations became available. These systems require less support and less expensive computer hardware. They also increase EIS information access to more company users.
Software
Choosing the appropriate software is vital to an effective EIS. Therefore, the software components and how they integrate the data into one system are important. A typical EIS includes four software components:
  1. Text-handling software—documents are typically text-based
  2. Database—heterogeneous databases on a range of vendor-specific and open computer platforms help executives access both internal and external data
  3. Graphic base—graphics can turn volumes of text and statistics into visual information for executives. Typical graphic types are: time series charts, scatter diagrams, maps, motion graphics, sequence charts, and comparison-oriented graphs (i.e., bar charts)
  4. Model base—EIS models contain routine and special statistical, financial, and other quantitative analysis
User interface
An EIS must retrieve relevant data for decision makers efficiently, so the user interface is very important. Several types of interfaces can be available to the EIS structure, such as scheduled reports, questions/answers, menu driven, command language, natural language, and input/output.
Telecommunication

As decentralization becomes the current trend in companies, telecommunications will play a pivotal role in networked information systems. Transmitting data from one place to another has become crucial for establishing a reliable network. In addition, telecommunications within an EIS can accelerate the need for access to distributed data.




Advantages of EIS

  • Easy for upper-level executives to use; extensive computer experience is not required to operate it
  • Provides timely delivery of company summary information
  • Information that is provided is better understood
  • EIS provides timely delivery of information. Management can make decisions promptly.
  • Improves tracking information
  • Offers efficiency to decision makers
Disadvantages of EIS
  • System dependent
  • Limited functionality, by design
  • Information overload for some managers
  • Benefits hard to quantify
  • High implementation costs
  • System may become slow, large, and hard to manage
  • Need good internal processes for data management
  • May lead to less reliable and less secure data


3. Why should a student of CMA study information systems?

Ans: Every business, program or system must address well-defined objectives, which will add value, either directly to the bottom line or toward the achievement of the organization's goals and objectives. Good management information objectives usually fall into one of three categories:
  • Service (effective and efficient),
  • Profit (or cost-avoidance), and
  • Social (moral, ethical and legal) responsibility.
A good management information system will only deliver its benefits if the company gains the insight to better align strategies and identify critical relationships and gaps along four key company dimensions – people, process, culture and infrastructure.
A good information system provides a framework for companies to evaluate themselves relative to these dimensions. By understanding and improving alignment with these critical dimensions, companies can maximize the value and impact of information as a strategic corporate asset to gain competitive advantage.
The following are the most important reasons to have a good management information system:
1. To control the creation and growth of records
Despite decades of using various non-paper storage media, the amount of paper in our offices continues to escalate. An effective records information system addresses both creation control (limits the generation of records or copies not required to operate the business) and records retention (a system for destroying useless records or retiring inactive records), thus stabilizing the growth of records in all formats.
2. To reduce operating costs
Recordkeeping requires administrative dollars for filing equipment, space in offices, and staffing to maintain an organized filing system (or to search for lost records when there is no organized system).
It costs considerably less per linear foot to store inactive records in a data records center than in the office. Multiply that by the 30% to 50% of records in an office that doesn't have a records management program in place, and there is an opportunity to effect some cost savings in space and equipment, and an opportunity to utilize staff more productively – just by implementing a records management program.
3. To improve efficiency and productivity
Time spent searching for missing or misfiled records is non-productive. A good records management program (e.g. a document system) can help any organization upgrade its recordkeeping systems so that information retrieval is enhanced, with corresponding improvements in office efficiency and productivity. A well designed and operated filing system with an effective index can facilitate retrieval and deliver information to users as quickly as they need it.
Moreover, a well managed information system acting as a corporate asset enables organizations to objectively evaluate their use of information and accurately lay out a roadmap for improvements that optimize business returns.
4. To assimilate new records management technologies
A good records management program provides an organization with the capability to assimilate new technologies and take advantage of their many benefits. Investments in new computer systems, whether financial, business or otherwise, don't solve filing problems unless current manual recordkeeping or bookkeeping systems are analyzed (and occasionally, overhauled) before automation is applied.
5. To ensure regulatory compliance
In terms of recordkeeping requirements, China is a heavily regulated country. These laws can create major compliance problems for businesses and government agencies since they can be difficult to locate, interpret and apply. The only way an organization can be reasonably sure that it is in full compliance with laws and regulations is by operating a good management information system which takes responsibility for regulatory compliance, while working closely with the local authorities. Failure to comply with laws and regulations could result in severe fines, penalties or other legal consequences.
6. To minimize litigation risks
Business organizations implement management information systems and programs in order to reduce the risks associated with litigation and potential penalties. This can be equally true in Government agencies. For example, a consistently applied records management program can reduce the liabilities associated with document disposal by providing for their systematic, routine disposal in the normal course of business.
7. To safeguard vital information
Every organization, public or private, needs a comprehensive program for protecting its vital records and information from catastrophe or disaster, because every organization is vulnerable to loss. Operated as part of a good management information system, vital records programs preserve the integrity and confidentiality of the most important records and safeguard the vital information assets according to a "Plan" to protect the records.  This is especially the case for financial information whereby ERP (Enterprise Resource Planning) systems are being deployed in large companies.
8. To support better management decision making
In today's business environment, the manager that has the relevant data first often wins, either by making the decision ahead of the competition, or by making a better, more informed decision. A good management information system can help ensure that managers and executives have the information they need when they need it.
By implementing an enterprise-wide file organization, including indexing and retrieval capability, managers can obtain and assemble pertinent information quickly for current decisions and future business planning purposes.  Likewise, implementing a good ERP system to take account of all the business’ processes both financial and operational will give an organization more advantages than one who was operating a manual based system.
9. To preserve the corporate memory
An organization's files, records and financial data contain its institutional memory, an irreplaceable asset that is often overlooked. Every business day, you create the records, which could become background data for future management decisions and planning. 
10. To foster professionalism in running the business
A business office with files, documents and financial data askew, stacked on top of file cabinets and in boxes everywhere, creates a poor working environment. The perceptions of customers and the public, and "image" and "morale" of the staff, though hard to quantify in cost-benefit terms, may be among the best reasons to establish a good management information system.


4. What are the major types of information systems?

Ans :
An information system is a collection of hardware, software, data, people and procedures that are designed to generate information that supports the day-to-day, short-range, and long-range activities of users in an organization.  Information systems generally are classified into five categories:  office information systems, transaction processing systems, management information systems, decision support systems, and expert systems.  The following sections present each of these information systems.
 1. Office Information Systems
  An office information system, or OIS (pronounced oh-eye-ess), is an information system that uses hardware, software and networks to enhance work flow and facilitate communications among employees.  With an office information system, also described as office automation, employees perform tasks electronically using computers and other electronic devices, instead of manually.  With an office information system, for example, a registration department might post the class schedule on the Internet and e-mail students when the schedule is updated.  In a manual system, the registration department would photocopy the schedule and mail it to each student's house.
 An office information system supports a range of business office activities such as creating and distributing graphics and/or documents, sending messages, scheduling, and accounting.  All levels of users from executive management to nonmanagement employees utilize and benefit from the features of an OIS.
 The software an office information system uses to support these activities includes word processing, spreadsheets, databases, presentation graphics, e-mail, Web browsers, Web page authoring, personal information management, and groupware.  Office information systems use communications technology such as voice mail, facsimile (fax), videoconferencing, and electronic data interchange (EDI) for the electronic exchange of text, graphics, audio, and video.  An office information system also uses a variety of hardware, including computers equipped with modems, video cameras, speakers, and microphones; scanners; and fax machines.
 2. Transaction Processing Systems
 A transaction processing system (TPS) is an information system that captures and processes data generated during an organization’s day-to-day transactions.  A transaction is a business activity such as a deposit, payment, order or reservation.
 Clerical staff typically perform the activities associated with transaction processing, which include the following:
 1. Recording a business activity such as a student's registration, a customer's order, an employee's timecard or a client's payment.
 2. Confirming an action or triggering a response, such as printing a student's schedule, sending a thank-you note to a customer, generating an employee's paycheck or issuing a receipt to a client.
 3. Maintaining data, which involves adding new data, changing existing data, or removing unwanted data.
 Transaction processing systems were among the first computerized systems developed to process business data – a function originally called data processing.  Usually, the TPS computerized an existing manual system to allow for faster processing, reduced clerical costs and improved customer service.
 The first transaction processing systems usually used batch processing.  With batch processing, transaction data is collected over a period of time and all transactions are processed later, as a group.  As computers became more powerful, system developers built online transaction processing systems.  With online transaction processing (OLTP) the computer processes transactions as they are entered.  When you register for classes, your school probably uses OLTP.  The registration administrative assistant  enters your desired schedule and the computer immediately prints your statement of classes.  The invoices, however, often are printed using batch processing, meaning all student invoices are printed and mailed at a later date.
 Today, most transaction processing systems use online transaction processing.  Some routine processing tasks such as calculating paychecks or printing invoices, however, are performed more effectively on a batch basis.  For these activities, many organizations still use batch processing techniques.
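
To make the batch/online distinction concrete, here is a minimal Python sketch (the Transaction class and the two processing functions are invented purely for illustration and are not tied to any particular registration or invoicing system):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Transaction:
        account: str
        amount: float

    def process_online(txn: Transaction, balances: dict) -> None:
        # Online transaction processing (OLTP): each transaction is applied
        # immediately, as soon as it is entered.
        balances[txn.account] = balances.get(txn.account, 0.0) + txn.amount

    def process_batch(txns: List[Transaction], balances: dict) -> None:
        # Batch processing: transactions are collected over a period of time
        # and applied together later, as a group (e.g. an overnight invoice run).
        for txn in txns:
            balances[txn.account] = balances.get(txn.account, 0.0) + txn.amount

    balances = {}
    process_online(Transaction("A-100", 250.0), balances)            # applied right away
    queued = [Transaction("A-100", -50.0), Transaction("B-200", 75.0)]
    process_batch(queued, balances)                                   # applied later, as a group
    print(balances)   # {'A-100': 200.0, 'B-200': 75.0}
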
 3. Management Information Systems
 While computers were ideal for routine transaction processing, managers soon realized that the computers’ capability of performing rapid calculations and data comparisons could produce meaningful information for management.  Management information systems thus evolved out of transaction processing systems.  A management information system, or MIS (pronounced em-eye-ess), is an information system that generates accurate, timely and organized information so managers and other users can make decisions, solve problems, supervise activities, and track progress.  Because it generates reports on a regular basis, a management information system sometimes is called a management reporting system (MRS).

Management information systems often are integrated with transaction processing systems.  To process a sales order, for example, the transaction processing system records the sale, updates the customer’s account balance, and makes a deduction from inventory.  Using this information, the related management information system can produce reports that recap daily sales activities; list customers with past due account balances; graph slow or fast selling products; and highlight inventory items that need reordering.  A management information system focuses on generating information that management and other users need to perform their jobs.
 An MIS generates three basic types of information:  detailed, summary and exception.  Detailed information typically confirms transaction processing activities.  A Detailed Order Report is an example of a detailed report.  Summary information consolidates data into a format that an individual can review quickly and easily.  To help synopsize information, a summary report typically contains totals, tables, or graphs.  An Inventory Summary Report is an example of a summary report.
 Exception information filters data to report information that is outside of a normal condition.  These conditions, called the exception criteria, define the range of what is considered normal activity or status.  An example of an exception report is an Inventory Exception Report that notifies the purchasing department of items it needs to reorder.  Exception reports help managers save time because they do not have to search through a detailed report for exceptions.  Instead, an exception report brings exceptions to the manager's attention in an easily identifiable form.  Exception reports thus help them focus on situations that require immediate decisions or actions.
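
As a rough illustration of how the same detailed records can feed summary and exception reports, consider this small Python sketch (the inventory records and reorder points are made up for the example):

    # Detailed information: one record per inventory item.
    inventory = [
        {"item": "Paper", "on_hand": 120, "reorder_point": 50},
        {"item": "Toner", "on_hand": 8,   "reorder_point": 20},
        {"item": "Pens",  "on_hand": 300, "reorder_point": 100},
    ]

    # Summary information: totals that can be reviewed quickly and easily.
    total_on_hand = sum(rec["on_hand"] for rec in inventory)
    print("Inventory Summary Report: total units on hand =", total_on_hand)

    # Exception information: only records outside the normal condition
    # (here, items that have fallen below their reorder point).
    exceptions = [rec for rec in inventory if rec["on_hand"] < rec["reorder_point"]]
    print("Inventory Exception Report:", [rec["item"] for rec in exceptions])
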
 4. Decision Support Systems
 Transaction processing and management information systems provide information on a regular basis.  Frequently, however, users need information not provided in these reports to help them make decisions.  A sales manager, for example, might need to determine how high to set yearly sales quotas based on increased sales and lowered product costs.  Decision support systems help provide information to support such decisions.
 A decision support system (DSS) is an information system designed to help users reach a decision when a decision-making situation arises.  A variety of DSSs exist to help with a range of decisions. 
 A decision support system uses data from internal and/or external sources.
 Internal sources of data might include sales, manufacturing, inventory, or financial data from an organization’s database.  Data from external sources could include interest rates, population trends, and costs of new housing construction or raw material pricing.  Users of a DSS, often managers, can manipulate the data used in the DSS to help with decisions.
 Some decision support systems include query language, statistical analysis capabilities, spreadsheets, and graphics that help you extract data and evaluate the results.   Some decision support systems also include capabilities that allow you to create a model of the factors affecting a decision.  A simple model for determining the best product price, for example, would include factors for the expected sales volume at each price level.  With the model, you can ask what-if questions by changing one or more of the factors and viewing the projected results.  Many people use application software packages to perform DSS functions.  Using spreadsheet software, for example, you can complete simple modeling tasks or what-if scenarios.
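
To make the what-if idea concrete, here is a minimal Python sketch of such a pricing model (the price points, expected volumes and unit cost are invented for illustration; a real DSS would draw them from internal and external data sources):

    # Simple DSS-style what-if model: expected sales volume at each price level.
    scenarios = {9.99: 12000, 11.99: 9500, 13.99: 7000}   # price -> expected units sold
    unit_cost = 6.50

    def projected_profit(price: float, volume: int) -> float:
        return (price - unit_cost) * volume

    # "What if we change the price?" -- recompute the projection for each factor change.
    for price, volume in scenarios.items():
        print(f"Price {price:6.2f}: projected profit {projected_profit(price, volume):12,.2f}")
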

A special type of DSS, called an executive information system (EIS), is designed to support the information needs of executive management.  Information in an EIS is presented in charts and tables that show trends, ratios, and other managerial statistics.  Because executives usually focus on strategic issues, EISs rely on external data sources such as the Dow Jones News/Retrieval service or the Internet.  These external data sources can provide current information on interest rates, commodity prices, and other leading economic indicators.
 To store all the necessary decision-making data, DSSs or EISs often use extremely large databases, called data warehouses.  A data warehouse stores and manages the data required to analyze historical and current business circumstances.
 5. Expert Systems
 An expert system  is an information system that captures and stores the knowledge of human experts and then imitates human reasoning and decision-making processes for those who have less expertise.  Expert systems are composed of two main components:  a knowledge base and inference rules.  A knowledge base is the combined subject knowledge and experiences of the human experts.  The inference rules are a set of logical judgments applied to the knowledge base each time a user describes a situation to the expert system.
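
A toy Python sketch of these two components (the facts and inference rules below are invented; a real expert system has a far richer knowledge base and inference engine):

    # Knowledge base: facts gathered from the user plus rules captured from experts.
    facts = {"fever", "cough"}

    # Inference rules: IF all the conditions hold THEN add the conclusion.
    rules = [
        ({"fever", "cough"}, "possible_flu"),
        ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
    ]

    # Simple forward-chaining inference: keep applying rules until nothing new is derived.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)   # e.g. {'fever', 'cough', 'possible_flu'} (set order may vary)
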
 Although expert systems can help decision-making at any level in an organization, nonmanagement employees are the primary users who utilize them to help with job-related decisions.  Expert systems also successfully have resolved such diverse problems as diagnosing illnesses, searching for oil and making soup.
 Expert systems are one part of an exciting branch of computer science called artificial intelligence.  Artificial intelligence (AI) is the application of human intelligence to computers.  AI technology can sense your actions and, based on logical assumptions and prior experience, will take the appropriate action to complete the task.  AI has a variety of capabilities, including speech recognition, logical reasoning, and creative responses.
 Experts predict that AI eventually will be incorporated into most computer systems and many individual software applications.  Many word processing programs already include speech recognition.

Integrated Information Systems
 With today’s sophisticated hardware, software and communications technologies, it often is difficult to classify a system as belonging uniquely to one of the five information system types discussed.  Much of today’s application software supports transaction processing and generates management information.  Other applications provide transaction processing, management information, and decision support.  Although expert systems still operate primarily as separate systems, organizations increasingly are consolidating their information needs into a single, integrated information system.

[Figure: Four-Level Pyramid model]

5. Describe the role of the CIO.
Ans:

 Chief Information Officer
Role
The Chief Information Officer’s role is to provide vision and leadership for developing and implementing information technology initiatives that align with the mission of FOCUS. The Chief Information Officer directs the planning and implementation of enterprise IT systems in support of FOCUS operations in order to improve cost effectiveness, service quality, and mission development. This individual is responsible for all aspects of the FOCUS information technology and systems.
Responsibilities

Strategy & Planning
Participate in strategic and operational governance processes of FOCUS as a member of the senior management team.
Lead IT strategic and operational planning to achieve FOCUS goals by fostering innovation, prioritizing IT initiatives, and coordinating the evaluation, deployment, and management of current and future IT systems across the organization.
Develop and maintain an appropriate IT organizational structure that supports the needs of the business.
Establish IT departmental goals, objectives, and operating procedures.
Identify opportunities for the appropriate and cost-effective investment of financial resources in IT systems and resources, including staffing, sourcing, purchasing, and in-house development.
Assess and communicate risks associated with IT investments.
Develop, track, and control the information technology annual operating and capital budgets.
Develop business case justifications and cost/benefit analyses for IT spending and initiatives.
Direct development and execution of an enterprise-wide disaster recovery and business continuity plan.
Assess and make recommendations on the improvement or re-engineering of the IT organization.

Acquisition & Deployment
Coordinate and facilitate consultation with stakeholders to define business and systems requirements for new technology implementations.
Approve, prioritize, and control projects and the project portfolio as they relate to the selection, acquisition, development, and installation of major information systems.
Review hardware and software acquisition and maintenance contracts and pursue master agreements to capitalize on economies of scale.
Define and communicate corporate plans, policies, and standards for the organization for acquiring, implementing, and operating IT systems.

Operational Management
Ensure continuous delivery of IT services through oversight of service level agreements with end users and monitoring of IT systems performance.
Ensure IT system operation adheres to applicable laws and regulations.
Establish lines of control for current and proposed information systems.
Keep current with trends and issues in the IT industry, including current technologies and prices. Advise, counsel, and educate executives and management on their competitive or financial impact.
Promote and oversee strategic relationships between internal IT resources and external entities.
Supervise recruitment, development, retention, and organization of all IT staff in accordance with corporate budgetary objectives and personnel policies.

6. Features of 4th generation programming languages
Ans:
Fourth generation programming languages (4GLs) are designed to achieve a specific goal (such as developing commercial business applications). 4GLs followed third generation programming languages (3GLs) and surpassed them in user-friendliness and in their higher level of abstraction. This is achieved through the use of words (or phrases) that are very close to the English language, and sometimes through graphical constructs such as icons, interfaces and symbols. Because the languages are designed according to the needs of particular domains, programming in a 4GL is very efficient. Furthermore, 4GLs rapidly expanded the number of professionals who engage in application development. Many fourth generation programming languages are targeted towards processing data and handling databases, and are based on SQL.

1. They possess friendly interfaces.
2. They are easier to use than previously used high-level languages.
3. The programming language contained within a 4GL is closely linked to the English language structure.
4. The downside of a 4GL is that programs run slower than those of earlier language generations, because their machine code equivalent is considerably longer and more complicated to execute.
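
To illustrate the difference in abstraction level, the sketch below contrasts a 3GL-style loop with a 4GL-style declarative SQL query, using Python with an in-memory SQLite database (the sales table and its data are invented for the example):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?)",
                    [("East", 100.0), ("West", 250.0), ("East", 75.0)])

    # 3GL style: the programmer spells out HOW to compute the total, step by step.
    total = 0.0
    for region, amount in con.execute("SELECT region, amount FROM sales"):
        if region == "East":
            total += amount
    print(total)   # 175.0

    # 4GL style (SQL): the programmer states WHAT result is wanted, close to English.
    print(con.execute("SELECT SUM(amount) FROM sales WHERE region = 'East'").fetchone()[0])
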

7. What is data modeling? What is its purpose? Briefly describe three commonly used data models.

Ans:  Data Modeling :
Data modeling is the act of exploring data-oriented structures. Like other modeling artifacts, data models can be used for a variety of purposes, from high-level conceptual models to physical data models. From the point of view of an object-oriented developer, data modeling is conceptually similar to class modeling: with data modeling you identify entity types, whereas with class modeling you identify classes. Data attributes are assigned to entity types just as you would assign attributes and operations to classes. There are associations between entities, similar to the associations between classes – relationships, inheritance, composition, and aggregation are all applicable concepts in data modeling.

The role of data models

[Figure: How data models deliver benefit][4]
The main aim of data models is to support the development of information systems by providing the definition and format of data. According to West and Fowler (1999) "if this is done consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data. The results of this are indicated above. However, systems and interfaces often cost more than they should, to build, operate, and maintain. They may also constrain the business rather than support it. A major cause is that the quality of the data models implemented in systems and interfaces is poor".[4]
  • "Business rules, specific to how things are done in a particular place, are often fixed in the structure of a data model. This means that small changes in the way business is conducted lead to large changes in computer systems and interfaces".[4]
  • "Entity types are often not identified, or incorrectly identified. This can lead to replication of data, data structure, and functionality, together with the attendant costs of that duplication in development and maintenance".[4]
  • "Data models for different systems are arbitrarily different. The result of this is that complex interfaces are required between systems that share data. These interfaces can account for between 25-70% of the cost of current systems".[4]
  • "Data cannot be shared electronically with customers and suppliers, because the structure and meaning of data has not been standardised. For example, engineering design data and drawings for process plant are still sometimes exchanged on paper".[4]
The reason for these problems is a lack of standards that will ensure that data models will both meet business needs and be consistent.[4] According to Hoberman (2009), "A data model is a wayfinding tool for both business and IT professionals, which uses a set of symbols and text to precisely explain a subset of real information to improve communication within the organization and thereby lead to a more flexible and stable application environment."[2]
A data model explicitly determines the structure of data or structured data. Typical applications of data models include database models, design of information systems, and enabling exchange of data. Usually data models are specified in a data modeling language.[3]
Communication and precision are the two key benefits that make a data model important to applications that use and exchange data. A data model is the medium through which project team members from different backgrounds and with different levels of experience can communicate with one another. Precision means that the terms and rules on a data model can be interpreted in only one way and are not ambiguous.[2]
A data model can be sometimes referred to as a data structure, especially in the context of programming languages. Data models are often complemented by function models, especially in the context of enterprise models.
Types of data models:
A database model is a specification describing how a database is structured and used. Several such models have been suggested. Common models include:
  • Flat model: This may not strictly qualify as a data model. The flat (or table) model consists of a single, two-dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another.
  • Hierarchical model: In this model data is organized into a tree-like structure, implying a single upward link in each record to describe the nesting, and a sort field to keep the records in a particular order in each same-level list.
  • Network model: This model organizes data using two fundamental constructs, called records and sets. Records contain fields, and sets define one-to-many relationships between records: one owner, many members.
  • Relational model: A database model based on first-order predicate logic. Its core idea is to describe a database as a collection of predicates over a finite set of predicate variables, describing constraints on the possible values and combinations of values.
  • Concept-oriented model
  • Star schema: The simplest style of data warehouse schema. The star schema consists of a few "fact tables" (possibly only one, justifying the name) referencing any number of "dimension tables". The star schema is considered an important special case of the snowflake schema. (A small sketch follows below.)
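
A minimal sketch of a star schema, again using Python with an in-memory SQLite database (the fact and dimension tables are invented for illustration):

    import sqlite3

    con = sqlite3.connect(":memory:")
    # Dimension tables: descriptive attributes used to slice and filter the facts.
    con.execute("CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER)")
    con.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT)")
    # Fact table: numeric measures plus foreign keys referencing the dimension tables.
    con.execute("""CREATE TABLE fact_sales (
                       date_id INTEGER REFERENCES dim_date(date_id),
                       product_id INTEGER REFERENCES dim_product(product_id),
                       units INTEGER, revenue REAL)""")

    # A typical star-schema query joins the central fact table to its dimensions.
    query = """SELECT d.year, p.name, SUM(f.revenue)
               FROM fact_sales f
               JOIN dim_date d ON f.date_id = d.date_id
               JOIN dim_product p ON f.product_id = p.product_id
               GROUP BY d.year, p.name"""
    print(con.execute(query).fetchall())   # [] until fact rows are loaded
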


8. What is the difference between DDL and DML?
Ans: DDL - Defines the database structure or schema. The DDL commands are
create
alter
drop
truncate
comment
rename

DDL commands are auto-commit.

DML - Manages data within schema objects. The DML commands are
update
delete
insert
merge
call
explain plan
lock table

DML commands are not auto-commit.

  • DDL triggers and DML triggers are used for different purposes.
  • DML triggers operate on INSERT, UPDATE, and DELETE statements, and help to enforce business rules and extend data integrity when data is modified in tables or views.
  • DDL triggers operate on CREATE, ALTER, DROP, and other DDL statements and stored procedures that perform DDL-like operations. They are used to perform administrative tasks and enforce business rules that affect databases. They apply to all commands of a single type across a database, or across a server.
  • DML triggers and DDL triggers are created, modified, and dropped by using similar Transact-SQL syntax, and share other similar behavior.
  • Like DML triggers, DDL triggers can run managed code packaged in an assembly that was created in the Microsoft .NET Framework and uploaded in SQL Server. For more information, see Programming CLR Triggers.
  • Like DML triggers, more than one DDL trigger can be created on the same Transact-SQL statement. Also, a DDL trigger and the statement that fires it are run within the same transaction. This transaction can be rolled back from within the trigger. Serious errors can cause a whole transaction to be automatically rolled back. DDL triggers that are run from a batch and explicitly include the ROLLBACK TRANSACTION statement will cancel the whole batch. For more information, see Using DML Triggers That Include COMMIT or ROLLBACK TRANSACTION.
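
As a hedged sketch of the DDL/DML split, the example below uses Python's sqlite3 module (the employees table is invented; note that the auto-commit behaviour described above is database-specific, so the commit is shown explicitly):

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # DDL: defines the schema (structure) of the database.
    cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")
    cur.execute("ALTER TABLE employees ADD COLUMN dept TEXT")

    # DML: manipulates the data stored inside those schema objects.
    cur.execute("INSERT INTO employees (name, salary, dept) VALUES (?, ?, ?)",
                ("Rahim", 42000.0, "Accounts"))
    cur.execute("UPDATE employees SET salary = salary * 1.1 WHERE dept = 'Accounts'")
    cur.execute("DELETE FROM employees WHERE salary < 10000")

    # In many database engines DDL commits implicitly while DML must be committed
    # (or rolled back) explicitly; here the DML changes are committed by hand.
    con.commit()
    print(cur.execute("SELECT name, salary FROM employees").fetchall())
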





9. What is a hypermedia database? How does it differ from a traditional database? How is it used for the web?
Ans :

10. What are controls? What are general controls and what are application controls?

Ans:
In business and accounting, information technology controls (or IT controls) are specific activities performed by persons or systems designed to ensure that business objectives are met. They are a subset of an enterprise's internal control. IT control objectives relate to the confidentiality, integrity, and availability of data and the overall management of the IT function of the business enterprise. IT controls are often described in two categories: IT general controls (ITGC) and IT application controls.
IT General Controls – these are policies and procedures that relate to many applications and support the effective functioning of application controls by helping to ensure the continued proper operation of information systems. These controls apply to mainframe, server, and end-user environments. General IT controls commonly include:
• Controls over data centre and network operations
• System software acquisition, change and maintenance
• Access security
• Application system acquisition, development, and maintenance.
• Physical security of assets, including adequate safeguards such as secured facilities over access to assets and records,
• Authorization for access to computer programs and data files.
Separation of the duties performed by analysts, programmers and operators is another important IT general control. The general idea is that anyone who designs a processing system should not do the technical programming work, and anyone who performs either of these tasks should not be the computer operator when "live" data are being processed. Persons performing each function should not have access to the equipment. Computer systems are susceptible to manipulative handling, and the lack of separation of duties along the lines described should be considered a serious weakness in general control. The control group or similar monitoring by the user departments can be an important compensating factor for weaknesses arising from lack of separation of duties in computerized systems.
IT General Controls are one of the most important areas to review, especially as part of the CEO / CFO Certification at publicly listed entities in Canada. It makes sense – almost all businesses use some form of ERP system, including automated financial reporting systems. The accuracy and reliability of financial reporting depend to a large extent on the IT controls that an organization has in place.
IT Application Controls – these are controls that relate to specific computer software applications and the individual transactions. For example, a company would usually place restrictions on which personnel have authorization to access its general ledger so as to revise its chart of accounts, posting / approving journal entries etc. In order to enact this policy and restrict access, the general ledger software package would require the necessary functionality. Furthermore, assuming the functionality exists, does the company have a policy in place, and is there evidence that the general ledger authorizations align with the policy? Controls around application access are obviously very important and need to be reviewed closely as part of the certification process.
The literature and regulations pertaining to the review and testing of IT Application controls by auditors and management address three types of application controls: Input Controls (transactions are captured, accurately recorded, and properly authorized), Processing Controls (transaction processing has been performed as intended), and Output Controls (accuracy of the processing result). These control tests are typically performed when a new system has been implemented. Afterwards, once the controls have been confirmed to be operating effectively, for purposes of expediency the focus tends to be on the "key" controls, such as who has system access to make changes to the various applications, and whether the policies are being followed. (A loose illustration of the three control types follows below.)
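
A loose Python illustration of the three control types (the authorised-user list, price checks and totals are all invented for the example and do not represent any particular ERP package):

    AUTHORISED_PRICE_EDITORS = {"cfo", "pricing_manager"}   # assumed policy list, illustration only

    def input_control(user, new_price):
        # Input control: the transaction must be properly authorised and plausible.
        if user not in AUTHORISED_PRICE_EDITORS:
            raise PermissionError(f"{user} is not authorised to change prices")
        if new_price <= 0:
            raise ValueError("price must be positive")

    def processing_control(line_items, computed_total):
        # Processing control: confirm that processing was performed as intended.
        if abs(sum(line_items) - computed_total) > 0.01:
            raise RuntimeError("batch total does not match the sum of line items")

    def output_control(invoices_in, invoices_out):
        # Output control: check the accuracy/completeness of the processing result.
        if invoices_in != invoices_out:
            raise RuntimeError("some invoices were lost during processing")

    input_control("pricing_manager", 19.99)     # passes
    processing_control([10.0, 5.5, 4.5], 20.0)  # passes
    output_control(3, 3)                        # passes
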
In my experience, IT Application controls are extremely important to monitor. Consider the impact of incorrect pricing on reported revenues. The employees that have access to change pricing within the ERP system should be authorized by the appropriate level of management. A list of employees having access to pricing modifications should be reviewed periodically. Furthermore, the system should be secure so that only authorized employees can have access. This may sound very logical and straightforward, but without ongoing vigilance and monitoring by management, it is very likely that some unauthorized employees may have access. Incorrect pricing leads to incorrect revenues. Remember – revenue recognition has been cited as the number one cause of errors regarding financial reporting.
I hope this helps to bridge the gap between the theory behind IT General and IT Application Controls and the practical realities and basic requirements that businesses should be aware of.
11. What is IT security? How do you develop a disaster recovery plan?
Ans:
 Information security, sometimes shortened to InfoSec, is the practice of defending information from unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording or destruction. It is a general term that can be used regardless of the form the data may take (electronic, physical, etc.)
The definitions of InfoSec suggested in different sources are summarised below (adapted from [2]).
1. "Preservation of confidentiality, integrity and availability of information. Note: In addition, other properties, such as authenticity, accountability, non-repudiation and reliability can also be involved." (ISO/IEC 27000:2009)[3]
2. "The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability." (CNSS, 2010)[4]
3. "Ensures that only authorized users (confidentiality) have access to accurate and complete information (integrity) when required (availability)." (ISACA, 2008)[5]
4. "Information Security is the process of protecting the intellectual property of an organisation." (Pipkin, 2000)[6]
5. "...information security is a risk management discipline, whose job is to manage the cost of information risk to the business." (McDermott and Geer, 2001)[7]
6. "A well-informed sense of assurance that information risks and controls are in balance." (Anderson, J., 2003)[8]
7. "Information security is the protection of information and minimises the risk of exposing information to unauthorised parties." (Venter and Eloff, 2003)[9]
8. "Information Security is a multidisciplinary area of study and professional activity which is concerned with the development and implementation of security mechanisms of all available types (technical, organisational, human-oriented and legal) in order to keep information in all its locations (within and outside the organisation’s perimeter) and, consequently, information systems, where information is created, processed, stored, transmitted and destroyed, free from threats.
Threats to information and information systems may be categorised and a corresponding security goal may be defined for each category of threats. A set of security goals, identified as a result of a threat analysis, should be revised periodically to ensure its adequacy and conformance with the evolving environment. The currently relevant set of security goals may include: confidentiality, integrity, availability, privacy, authenticity & trustworthiness, non-repudiation, accountability and auditability." (Cherdantseva and Hilton, 2013)[2]
An Information Technology (IT) Security Policy identifies the rules and procedures for all individuals accessing and using an organization's IT assets and resources. An effective IT Security Policy is a model of the organization's culture, in which rules and procedures are driven by its employees' approach to their information and work. Thus, an effective IT security policy is a unique document for each organization, cultivated from its people's perspectives on risk tolerance, how they see and value their information, and the resulting availability that they maintain of that information. For this reason, many companies will find a boilerplate IT security policy inappropriate due to its lack of consideration for how the organization's people actually use and share information among themselves and with the public.



Disaster recovery plan


Disaster recovery of critical data and resumption of IT services is at the core of a full business continuity plan, but why is it needed?
Disasters can strike in many different ways including:
  • Natural Disasters
    • Tornadoes, fires, earthquakes
  • Terrorist Attacks
  • Computer Based Disasters
    • Viruses, worms, malicious code
    • Critical equipment failure

Having detailed plans and architectures in place to recover critical data from a disaster such as these is crucial for any organization. Their survival may well depend upon it. Let’s look at a few examples.
September 11, 2001 is one of the most infamous dates in the history of the United States. The attack on the World Trade Center buildings claimed thousands of lives, but their destruction also decimated thousands of businesses. Many of those businesses failed to survive because they did not have adequate backups and plans to recover critical data. Some had backups, but those backups were kept in the other building rather than at a remote geographic location. There were a few companies that did survive without a backup, but they lost tremendous amounts of time and money recreating the data necessary to function.
In 2005, Hurricane Katrina slammed into the Deep South of the United States and wrecked infrastructure, homes, businesses, and left a region in turmoil for months. Communications were tremendously impacted as phones and networks were completely knocked out. Many local Points of Presence (POPs) and local data centers were damaged or under water. Some businesses were able to recover their data from remote backups, effectively move to a temporary location, and utilize alternative communication for those sites because they utilized detailed disaster recovery and business continuity plans.
While major catastrophes are not common, smaller scale disasters can still be devastating to an organization. One large computer virus that runs rampant in an IT organization can cripple functions and corrupt valuable data. Failures of redundancy in equipment and power can and do happen. The important thing to remember is that disasters may be minimized (viruses, attacks, equipment failures, etc.), but they will happen. It is up to individual organizations to devise an effective plan for recovery and test its effectiveness.

Creating a Disaster Recovery Plan for Your Needs
As disasters range in type, size, and scope, it is critical to recognize key areas to prevent or minimize impact on the IT infrastructure.
The first step to achieving a disaster recovery design is to effectively plan for what you need to protect. The IT plan should be an integral part of an organization’s overall business continuity plan and be aligned to the same goals for protecting crucial data, access, and continued function.
Some companies' plans may be very simple due to scope and size, but plans can become more complex for organizations containing multiple departments and/or locations. The true purpose should be to develop a plan to restore services and data as quickly as possible in the event of a disaster.
There are several great planning books on disaster recovery and business continuity, but the steps below are common to many of the strategies for creating a recovery plan.

Step 1: Identify the Critical Needs
As an IT Security Professional involved in disaster recovery planning, you need to begin development of a disaster recovery plan with a list of requirements. The list of requirements should reflect the critical needs of the IT organization, how they impact the larger company, and what will allow it to continue to function.
The key here is to look at larger concepts first, i.e., redundancy for power and equipment, backups of data, and potential alternative sites. Don't drill down into the exact items yet; just look at the general picture.
Cost of a plan is also critically important and should not be overlooked in this process. A well detailed plan that can protect everything and keep the company up and running is great, but if it is too expensive and will bankrupt the organization, the plan will never be implemented.

Step 2: Identify Redundancy and Failover Potential
IT Security Professionals should work hand in hand with IT administrators to identify potential weaknesses in the organization and provide for redundancy and failover.
The focus here is to maintain access to key systems and data due to power and/or equipment failure. For power, redundant connections, Uninterruptable Power Supplies (UPS), and generators are the most common ways to ensure that systems can operate with power fluctuations or outages.
Equipment redundancy should involve redundant power supplies, management modules, RAID disk arrays with hot spare disks for failure, and redundant or alternative network connections. Whatever items are identified, remember that redundancy can be expensive and it is important to develop the plan to match your budget.

Step 3: Identify Backups and Backup Locations
Each organization will have different types of data and in varying amounts that need to be backed up. It is important to recognize types, since this will drive not only the amount of storage required, but also how regularly backups are needed.
Types of data typically backed up are:
  • Applications
  • User data files
  • Operating System backups
  • Financial Data
  • Emails
  • Audit Data
  • Databases
  • Transaction files
  • Website data
  • Organizational documents
  • Customer lists
Depending on cost, it may be needed to store copies of the data locally or at a remote facility.
Onsite storage could be on spinning disk arrays or in a tape library. In most common architectures, the data can be stored in a redundant file system or in an archive used for backups. Archives can utilize Virtual Tape Libraries (VTLs), which use spinning disk arrays alongside traditional tape libraries to store data long term. It is highly recommended that a backup of the data be stored at a remote location, preferably a geographically remote location more than 50 miles from the primary site. Natural disasters like earthquakes and hurricanes have a tendency to wreak havoc over large areas.
Now that you have identified the data and where to store it, it is important to address the method of backup and thus its restoration. Backups can consist of a full backup, an incremental backup, and a differential backup. A full backup is a comprehensive and complete copy of all files on disk or file system at that point in time. Systems can be restored from a full backup alone, but this backup type is recommended on a less frequent basis (like once a week) due to its long process time and resource demand.
 An incremental backup is a partial backup that only copies the information that has been changed since the last full or incremental backup. This backup is less intensive than a full backup and can be run more frequently after a full backup has been performed. If a restore is initiated, the last full backup must be restored and then each incremental backup restored in order to ensure full recovery.
 A differential backup is similar to an incremental backup, but it operates a little differently. This backup method copies all files that have been altered since the last full backup, regardless of any differential backups taken in between. When restoring, the administrator only needs the last full backup and then the last differential backup.
 Once the backup methods to be used have been identified, specific attention needs to be placed on the frequency and schedule. Some data backups will occur more frequently than others, and your decisions here will have an impact on the next section concerning equipment.
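
A minimal Python sketch of how an incremental backup selection might work (the paths and the last-backup timestamp are placeholders; real backup software tracks its state far more carefully and handles deletions, permissions and open files):

    import os
    import shutil
    import time

    def incremental_backup(source_dir, backup_dir, last_backup_time):
        """Copy only files modified since the last (full or incremental) backup."""
        copied = []
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                src = os.path.join(root, name)
                if os.path.getmtime(src) > last_backup_time:
                    rel = os.path.relpath(src, source_dir)
                    dst = os.path.join(backup_dir, rel)
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.copy2(src, dst)   # copy2 preserves timestamps
                    copied.append(rel)
        return copied

    # Example call (placeholder paths): back up everything changed in the last 24 hours.
    # changed = incremental_backup("/data/finance", "/backup/finance", time.time() - 24 * 3600)
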
Step 4: Backup Equipment
Specifying the correct and affordable equipment to provide access to your data is absolutely essential to running your organization, but of almost equal importance is the type of equipment used to back up and store that data. As part of the plan and design of the architecture, tremendous forethought is needed to address performance, growth in the amount of data, and supportability of the backup applications.
Backups need to complete without hindering production environments, network connections must be capable of moving all backups internally and externally without bottlenecks, and the architecture needs to be flexible enough to accommodate growth as the amount of data increases.
Care must also be taken to choose servers and components that will work with the applications performing backups or managing the archives. As always with this process, be mindful of the budget involved.

Step 5: Investigate Alternate Sites
One option is for the organization to investigate an alternative site from which it can temporarily function in the case of a disaster. Items to consider here are the size and supportability of the site for the organization and its IT, network and phone connectivity, and cost.

Step 6: Test
Disaster recovery and business continuity plans are useless unless they are tested for effectiveness. IT Security Professionals should work with members of management to conduct disaster drills that test recovery of data, failovers, and access to remotely stored data.

The Need for Disaster Recovery
As you can see from these few steps, a great deal of detail is required for disaster recovery plans. That detail is crucial to making sure that any disaster recovery plan is effective, can restore access to data, and allows the organization to continue to function.
There are issues we have not addressed here concerning the security of data at a remote location or in transit to that remote location, but those issues will be addressed in a later article.
Today, United States federal agencies require all departments to have clearly defined business continuity plans that include detailed disaster recovery architectures and restoration plans. Many organizations and companies around the world have also developed their own plans, and several of them have already needed to act on those plans to restore their data and maintain operations.
Developing a disaster recovery solution can be expensive, but properly managed, the cost can be justified. The question that should be asked of the management of any organization that does not have a disaster recovery or business continuity plan is: “When disaster strikes, can we afford to start over?”

12. Why are digital signatures and digital certificates important for electronic commerce?
Ans:
A digital signature or digital signature scheme is a mathematical scheme for demonstrating the authenticity of a digital message or document. A valid digital signature gives a recipient reason to believe that the message was created by a known sender, and that it was not altered in transit. Digital signatures are commonly used for software distribution, financial transactions, and in other cases where it is important to detect forgery or tampering.
Digital signatures are often used to implement electronic signatures, a broader term that refers to any electronic data that carries the intent of a signature,[1] but not all electronic signatures use digital signatures.[2][3][4] In some countries, including the United States, India, and members of the European Union, electronic signatures have legal significance. However, laws concerning electronic signatures do not always make clear whether they are digital cryptographic signatures in the sense used here, leaving the legal definition, and so their importance, somewhat confused.
Digital signatures employ a type of asymmetric cryptography. For messages sent through a nonsecure channel, a properly implemented digital signature gives the receiver reason to believe the message was sent by the claimed sender. Digital signatures are equivalent to traditional handwritten signatures in many respects; properly implemented digital signatures are more difficult to forge than the handwritten type. Digital signature schemes in the sense used here are cryptographically based, and must be implemented properly to be effective. Digital signatures can also provide non-repudiation, meaning that the signer cannot successfully claim they did not sign a message, while also claiming their private key remains secret; further, some non-repudiation schemes offer a time stamp for the digital signature, so that even if the private key is exposed, the signature is valid nonetheless. Digitally signed messages may be anything representable as a bitstring: examples include electronic mail, contracts, or a message sent via some other cryptographic protocol.
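As an illustration only (not part of the original answer), the following Python sketch signs and verifies a message with the third-party "cryptography" package; the RSA key size, PSS padding, and SHA-256 hash chosen here are assumptions made for the example, not a prescription for production systems.

    # pip install cryptography
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.exceptions import InvalidSignature

    message = b"Ship 100 units to the Dhaka warehouse"   # hypothetical message

    # The signer keeps the private key secret and publishes the public key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sign: only the holder of the private key can produce this signature.
    signature = private_key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # Verify: anyone with the public key can check authenticity and integrity.
    try:
        public_key.verify(
            signature,
            message,        # change even one byte and verification fails
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        print("Signature is valid")
    except InvalidSignature:
        print("Signature is NOT valid")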

Importance of digital signatures:
·  Speed: Businesses no longer have to wait for paper documents to be sent by courier. Contracts are easily written, completed, and signed by all concerned parties in very little time, no matter how far apart the parties are geographically.
·  Costs: Using postal or courier services for paper documents is much more expensive compared to using digital signatures on electronic documents.
·  Security: The use of digital signatures and electronic documents reduces risks of documents being intercepted, read, destroyed, or altered while in transit.
·  Authenticity: An electronic document signed with a digital signature can stand up in court just as well as any other signed paper document.
·  Tracking: A digitally signed document can easily be tracked and located in a short amount of time.
·  Non-Repudiation: Signing an electronic document digitally identifies you as the signatory, and that signature cannot later be denied.
·  Imposter prevention: No one else can forge your digital signature or submit an electronic document falsely claiming it was signed by you.
·  Time-Stamp: By time-stamping your digital signatures, you will know clearly when the document was signed.


A digital certificate (or identity certificate) is an electronic document used to prove ownership of a public key. The certificate includes information about the key, information about its owner's identity, and the digital signature of an entity that has verified that the certificate's contents are correct. If the signature is valid, and the person examining the certificate trusts the signer, then they know they can use that key to communicate with its owner.
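By way of a hedged illustration (not drawn from the original text), the short Python sketch below fetches a server's certificate and prints the fields just described. The host name is hypothetical, and the third-party "cryptography" package is assumed to be installed.

    import ssl
    from cryptography import x509

    host = "www.example.com"   # hypothetical host used only for illustration
    pem = ssl.get_server_certificate((host, 443))       # certificate in PEM form
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

    print("Subject:", cert.subject.rfc4514_string())    # whose public key this is
    print("Issuer :", cert.issuer.rfc4514_string())     # who vouches for it
    print("Valid  :", cert.not_valid_before, "to", cert.not_valid_after)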

Importance of digital certificates:
What does security provide?

Identification / Authentication:
The persons / entities with whom we are communicating are really who they say they are.
Confidentiality:
The information within the message or transaction is kept confidential. It may only be read and understood by the intended sender and receiver.
Integrity:
The information within the message or transaction cannot be tampered with, accidentally or deliberately, en route without all parties involved becoming aware of the tampering.
Non-Repudiation:
The sender cannot deny sending the message or transaction, and the receiver cannot deny receiving it.
Access Control:
Access to the protected information is only realized by the intended person or entity.
All the above security properties can be achieved and implemented through the use of Public Key Infrastructure (in particular Digital Certificates).
13. What are the sources of computer viruses? How can they be prevented?
Ans:  Sources are given below:
Downloadable Programs
One of the most common sources of virus attacks is programs downloaded from the web, particularly from unreliable sites and internet newsgroups. Any type of downloadable executable program, including games, freeware and screen savers, can carry a virus; executable files with extensions such as “.com” and “.exe” (for example, “coolgame.exe”) are typical carriers. If you want to download programs from the internet, it is necessary to scan every program after downloading it and before running it.
Cracked Software
Cracked software is yet another source of virus attacks. Most people who download cracked and illegal versions of software online are unaware that these copies may contain viruses as well. Such illegal copies often carry viruses and bugs that are difficult to detect and remove. Hence, it is always preferable to download software from a legitimate source.
Email Attachments
Email attachments are another popular source of computer virus attacks. Hence, you must handle email attachments with extreme care, especially if the email comes from an unknown sender. Installing a good antivirus program is essential if you want to reduce the possibility of virus attacks. It is necessary to scan attachments even when the email comes from a friend, since the friend may have unknowingly forwarded a virus along with the attachment.
The Internet
There is no denying that the internet is one of the most common sources of virus infection. This is not a real surprise, and it is no reason to stop using the internet. However, the majority of computer users are unaware of when viruses attack their systems; almost every computer user clicks on or downloads everything that comes their way and so unknowingly invites the possibility of virus attacks.
Booting from Unknown CD
Another source of virus attacks is booting from an unknown CD. Infection through a data CD is one of the more common routes. It is good practice to remove the CD when the computer system is not in use; if a disc is left in the drive after the computer is switched off, there is every possibility that the system will boot from it automatically the next time it starts.
Booting from an unknown disc makes it possible for files and programs to be installed and launched on the system without your knowledge. Apart from the sources mentioned above, file-sharing networks such as BearShare, Kazaa and LimeWire are possible sources of virus attacks too. Hence, files downloaded from these networks should be scanned, and deleted if suspicious, to reduce the possibility of virus infection.

Prevention from Computer virus :
1: Install quality antivirus

Many computer users believe free antivirus applications, such as those included with an Internet service provider's bundled service offering, are sufficient to protect a computer from virus or spyware infection. However, such free anti-malware programs typically don't provide adequate protection from the ever-growing list of threats.
Instead, all Windows users should install professional, business-grade antivirus software on their PCs. Pro-grade antivirus programs update more frequently throughout the day (thereby providing timely protection against fast-emerging vulnerabilities), protect against a wider range of threats (such as rootkits), and enable additional protective features (such as custom scans).

2: Install real-time anti-spyware protection

Many computer users mistakenly believe that a single antivirus program with integrated spyware protection provides sufficient safeguards from adware and spyware. Others think free anti-spyware applications, combined with an antivirus utility, deliver capable protection from the skyrocketing number of spyware threats.
Unfortunately, that's just not the case. Most free anti-spyware programs do not provide real-time, or active, protection from adware, Trojan, and other spyware infections. While many free programs can detect spyware threats once they've infected a system, typically professional (or fully paid and licensed) anti-spyware programs are required to prevent infections and fully remove those infections already present.

3: Keep anti-malware applications current

Antivirus and anti-spyware programs require regular signature and database updates. Without these critical updates, anti-malware programs are unable to protect PCs from the latest threats.
In early 2009, antivirus provider AVG released statistics showing that many serious computer threats are secretive and fast-moving. Many of these infections are short-lived, but they were estimated to infect as many as 100,000 to 300,000 new Web sites a day.
Computer users must keep their antivirus and anti-spyware applications up to date. All Windows users must take measures to prevent license expiration, thereby ensuring that their anti-malware programs stay current and continue providing protection against the most recent threats. Those threats now spread with alarming speed, thanks to the popularity of social media sites such as Twitter, Facebook, and MySpace.

4: Perform daily scans

Occasionally, virus and spyware threats escape a system's active protective engines and infect a system. The sheer number and volume of potential and new threats make it inevitable that particularly inventive infections will outsmart security software. In other cases, users may inadvertently instruct anti-malware software to allow a virus or spyware program to run.
Regardless of the infection source, enabling complete, daily scans of a system's entire hard drive adds another layer of protection. These daily scans can be invaluable in detecting, isolating, and removing infections that initially escape security software's attention.

5: Disable autorun

Many viruses work by attaching themselves to a drive and automatically installing themselves on any other media connected to the system. As a result, connecting any network drives, external hard disks, or even thumb drives to a system can result in the automatic propagation of such threats.
Computer users can disable the Windows autorun feature by following Microsoft's recommendations, which differ by operating system. Microsoft Knowledge Base articles 967715 and 967940 are frequently referenced for this purpose.
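As a rough illustration of one common approach (an assumption on my part, not a substitute for the Microsoft KB articles mentioned above), the Python sketch below sets the widely documented NoDriveTypeAutoRun registry value so that AutoRun is disabled for every drive type. Run it with appropriate privileges and verify the details against Microsoft's guidance for your Windows version.

    import winreg  # Windows-only standard library module

    # Explorer policy key under the current user's hive (a commonly cited location).
    POLICY_KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, POLICY_KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
        # 0xFF is commonly documented as "disable AutoRun on every drive type".
        winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)

    print("AutoRun disabled for all drive types (takes effect after sign-out or restart).")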

6: Disable image previews in Outlook

Simply receiving an infected Outlook e-mail message, one in which graphics code is used to enable the virus's execution, can result in infection. Protect against automatic infection by disabling image previews in Outlook.
By default, newer versions of Microsoft Outlook do not automatically display images. But if you or another user has changed the default security settings, you can switch them back (using Outlook 2007) by going to Tools | Trust Center, highlighting the Automatic Download option, and selecting Don't Download Pictures Automatically In HTML E-Mail Messages Or RSS.

7: Don't click on email links or attachments

It's a mantra most every Windows user has heard repeatedly: Don't click on email links or attachments. Yet users frequently fail to heed the warning.
Whether distracted, trustful of friends or colleagues they know, or simply fooled by a crafty email message, many users forget to be wary of links and attachments included within email messages, regardless of the source. Simply clicking on an email link or attachment can, within minutes, corrupt Windows, infect other machines, and destroy critical data.
Users should never click on email attachments without at least first scanning them for viruses using a business-class anti-malware application. As for clicking on links, users should access Web sites by opening a browser and manually navigating to the sites in question.

8: Surf smart

Many business-class anti-malware applications include browser plug-ins that help protect against drive-by infections, phishing attacks (in which pages purport to serve one function when in fact they try to steal personal, financial, or other sensitive information), and similar exploits. Still others provide "link protection," in which Web links are checked against databases of known-bad pages.
Whenever possible, these preventive features should be deployed and enabled. Unless the plug-ins interfere with normal Web browsing, users should leave them enabled. The same is true for automatic pop-up blockers, such as are included in Internet Explorer 8, Google's toolbar, and other popular browser toolbars.
Regardless, users should never enter user account, personal, financial, or other sensitive information on any Web page at which they haven't manually arrived. They should instead open a Web browser, enter the address of the page they need to reach, and enter their information that way, instead of clicking on a hyperlink and assuming the link has directed them to the proper URL. Hyperlinks contained within an e-mail message often redirect users to fraudulent, fake, or unauthorized Web sites. By entering Web addresses manually, users can help ensure that they arrive at the actual page they intend.
But even manual entry isn't foolproof. Hence the justification for step 10: Deploy DNS protection. More on that in a moment.

9: Use a hardware-based firewall

Technology professionals and others argue the benefits of software- versus hardware-based firewalls. Often, users encounter trouble trying to share printers, access network resources, and perform other tasks when deploying third-party software-based firewalls. As a result, I've seen many cases where firewalls have simply been disabled altogether.
But a reliable firewall is indispensable, as it protects computers from a wide variety of exploits, malicious network traffic, viruses, worms, and other vulnerabilities. Unfortunately, by itself, the software-based firewall included with Windows isn't sufficient to protect systems from the myriad robotic attacks affecting all Internet-connected systems. For this reason, all PCs connected to the Internet should be secured behind a capable hardware-based firewall.

10: Deploy DNS protection

Internet access introduces a wide variety of security risks. Among the most disconcerting may be drive-by infections, in which users only need to visit a compromised Web page to infect their own PCs (and potentially begin infecting those of customers, colleagues, and other staff).
Another worry is Web sites that distribute infected programs, applications, and Trojan files. Still another threat exists in the form of poisoned DNS attacks, in which a compromised DNS server directs you to an unauthorized Web server. The DNS servers in question are typically your ISP's systems, which translate friendly names such as yahoo.com into numeric IP addresses like 69.147.114.224.
Users can protect themselves from all these threats by changing the way their computers process DNS services. While a computer professional may be required to implement the switch, OpenDNS offers free DNS services to protect users against common phishing, spyware, and other Web-based hazards.
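To see what your current resolver is doing, the small Python sketch below (illustrative only; yahoo.com is simply the example host used earlier) looks up a name through the system's configured DNS servers, which is exactly the step a poisoned resolver would subvert.

    import socket

    hostname = "yahoo.com"  # the example host mentioned above
    # getaddrinfo consults the system's configured DNS servers
    # (for most users, the ISP's resolvers, or OpenDNS if it has been deployed).
    addresses = sorted({info[4][0] for info in socket.getaddrinfo(hostname, 443)})
    print(hostname, "resolves to:", ", ".join(addresses))
    # If these answers differ markedly from those of a known-good resolver,
    # the DNS path between you and the internet may be compromised.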
 

