Quality Analysis Online Training
Get practical exposure to analyzing an application, product, or service from the fundamentals to an advanced level, taught by real-time industry experts with practical use cases in the Quality Analysis Online Training, and become a professional quality analyst.
KITS Online Training Institute provides the best Quality Analysis (QA) training through our highly professional, certified trainers. This testing course is designed for anyone interested in building a career on the testing side; you should have knowledge of the software development life cycle before taking it. Here you will learn about testing methodologies, testing levels, types of testing, and much more. We are delighted to be one of the leading IT online training providers, with experienced IT professionals and skilled resources. We have been offering courses to consultants and companies so that they can meet all the challenges in their respective technologies. We also provide similar courses, such as Selenium Online Training.
Importance of software systems
Common problems in software development and Software Bugs
Testing Objectives
What is Manual and Automation Testing?
Tester Roles and Responsibilities
Is testing really important?
Why choose testing as a career?
Review, Walk through, KT and Kick off – Static Testing
Different Components in software environments
Differences between Development (Local), Test, and Production environments
Web applications, Windows based applications and Intranet applications
Differences between N-tier, two-tier, and other architectures
Unit Testing, Integration Testing
System Testing Techniques
Usability Testing, Functional Testing, and Non-Functional Testing
Boundary Value Analysis
Equivalence Class Partition
Error guessing, Negative testing
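To make these design techniques concrete: for a field that accepts values from 1 to 100, boundary value analysis picks inputs at and just around the edges of each equivalence class. A minimal sketch in Java (the course's language for Selenium), where the validator is a hypothetical stand-in for a real application rule:

```java
// Boundary value analysis for an input field that accepts 1..100.
// isValid() is a made-up validator used only for illustration.
public class BoundaryValueDemo {
    // Returns true when the value lies inside the valid range [min, max].
    static boolean isValid(int value, int min, int max) {
        return value >= min && value <= max;
    }

    // Classic boundary values: just below, at, and just above each edge.
    static int[] boundaryInputs(int min, int max) {
        return new int[] { min - 1, min, min + 1, max - 1, max, max + 1 };
    }

    public static void main(String[] args) {
        for (int input : boundaryInputs(1, 100)) {
            System.out.println(input + " -> " + isValid(input, 1, 100));
        }
    }
}
```

The inputs 0 and 101 double as negative tests: they are expected to be rejected, and a passing run confirms the validator fails them.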
Back End testing
Database Testing
Compatibility Testing
Security testing
Portability testing
Configuration Testing
Recovery Testing
Load Testing
Stress Testing
Scalability Testing
Soak Testing
Volume Testing
Test Case Design Templates
Types of Test Cases and Main qualities of Test cases
Test Case Design Reviews
Requirement Traceability Matrix
Test Data Setup
Importance of Test data in Testing
Approach for gathering Test Data
Benefits of Test data Gathering
Managing Test data and creating Data Repositories
Difference between Bug and Defect
Format of a Bug Report
Priority and Severity
Different statuses of a bug in the Bug Life Cycle
Bug Reporting tools JIRA/Bugzilla/Quality Center
Different levels of Test Execution
Sanity/Smoke Testing (Level 0)
Test Batches or Test Suite Preparation and Execution (Level 1)
Retesting (Level 2)
Regression Testing (Level 3)
Bug Leakage
Test Design
Contents of test plan
Master test plan and Testing level test plan
Entry and Exit criteria
Test Coverage
Test Responsibilities
Ad hoc Testing, Exploratory Testing
General risks in test environment
Test cases sign off
Retesting
Regression Testing
UAT testing
Alpha and beta testing
Monkey testing
Incremental Model
Prototype Model
Spiral Model
V Model
Agile method
What is Automation Testing
Benefits of Automation Testing
Manual Testing Vs Automation Testing
Various Automation Test Tools
Installing Java
Installing Eclipse
Features of Java
Why Java for Selenium
First Eclipse Project
First Java program
Concept of class file
Platform independence
Data types in Java
String class
If statements
Conditional and concatenation operators
While Loop
For Loops
Practical Examples with loops
Usage of loops in Selenium
Single Dimensional Arrays
Two Dimensional arrays
Practical usage of arrays in Selenium
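Selenium itself is introduced later in the course, but the array handling used there can be shown in plain Java. A sketch of how one- and two-dimensional arrays might hold test data (the page names and credentials are made-up values):

```java
public class ArrayDemo {
    // Joins one row of test data into a readable string; in a real Selenium
    // test, each value would instead be typed into a field with sendKeys().
    static String describeRow(String[] row) {
        return String.join("/", row);
    }

    public static void main(String[] args) {
        // Single-dimensional array: page names we expect to visit.
        String[] pages = { "Home", "Products", "Contact" };
        for (String page : pages) {
            System.out.println("Expecting page: " + page);
        }

        // Two-dimensional array: each row is one test case (username, password).
        // A loop like this drives the same steps with different data.
        String[][] credentials = {
            { "alice", "secret1" },
            { "bob",   "secret2" }
        };
        for (String[] row : credentials) {
            System.out.println("Test data: " + describeRow(row));
        }
    }
}
```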
Object Class
Drawbacks of arrays
What are Functions?
Function Input Parameters
Function Return Types
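A short sketch of input parameters and return types in Java functions; the title-matching check is a hypothetical example of the kind of helper a test might use:

```java
public class FunctionDemo {
    // A function with two input parameters and an int return type.
    static int add(int a, int b) {
        return a + b;
    }

    // A function with a boolean return type: does the actual page title
    // match the expected one? (Names here are illustrative only.)
    static boolean titlesMatch(String expected, String actual) {
        return expected.equals(actual);
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3));
        System.out.println(titlesMatch("Home", "Home"));
    }
}
```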
Relevance of Packages
Creating Packages
Accessing Classes Across Packages
Access modifiers - Public, Private, Default, Protected
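The four access levels can be sketched in a single file; note that `protected` and default (package-private) only show their full effect across packages and subclasses, so this is a simplified illustration:

```java
public class AccessDemo {
    private int counter;          // visible only inside this class
    int packagePrivate = 1;       // default: visible within the same package
    protected int inherited = 2;  // visible to subclasses and same package
    public int open = 3;          // visible everywhere

    // A public method exposes the private field in a controlled way.
    public int getCounter() {
        return counter;
    }

    public void increment() {
        counter++;
    }

    public static void main(String[] args) {
        AccessDemo d = new AccessDemo();
        d.increment();
        System.out.println(d.getCounter());
    }
}
```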
Exception handing with try catch block
Importance of exception handling
Exception and Error
Throwable Class
Final and Finally
Throw and Throws
Different Types of Exceptions
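A minimal try/catch/finally sketch: the parser below recovers from bad input with a fallback value instead of aborting, and the `finally` block runs on both paths. The method name and fallback convention are illustrative, not from any real library:

```java
public class ExceptionDemo {
    // Parses a number, returning a fallback instead of crashing the run.
    static int parseOrDefault(String text, int fallback) {
        try {
            return Integer.parseInt(text);
        } catch (NumberFormatException e) {
            // An Exception is recoverable, unlike an Error (usually fatal).
            System.out.println("Bad input: " + text);
            return fallback;
        } finally {
            // finally always runs, whether or not an exception was thrown.
            System.out.println("Done parsing: " + text);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrDefault("42", -1));
        System.out.println(parseOrDefault("oops", -1));
    }
}
```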
Installing TestNG in Eclipse
TestNG annotations
Understanding usage of annotations
Running a test batch in TestNG
Running tests in TestNG
Skipping tests
Parameterizing tests with DataProvider
Assertions and reporting errors
TestNG reports
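TestNG's @DataProvider runs a test method once per row of a two-dimensional object array. Since TestNG is a separate library, the underlying idea is sketched here in plain Java; the login rule is a made-up stand-in for a real test step:

```java
public class DataDrivenSketch {
    // Stand-in for a real test step; in TestNG this would be a @Test method
    // receiving its arguments from a @DataProvider. The rule is hypothetical.
    static boolean login(String user, String password) {
        return !user.isEmpty() && password.length() >= 6;
    }

    public static void main(String[] args) {
        // Each row: username, password, expected outcome.
        Object[][] data = {
            { "alice", "secret1", true  },
            { "bob",   "123",     false }
        };
        for (Object[] row : data) {
            boolean actual = login((String) row[0], (String) row[1]);
            System.out.println(row[0] + ": "
                    + (actual == (Boolean) row[2] ? "PASS" : "FAIL"));
        }
    }
}
```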
POM (Page Object Model)
Why do we use web service?
What is XML? Why is XML used for communication?
Famous protocols used in web services
What is WSDL?
How SOAP UI helps us
Java or Groovy?
SOAP UI free version
SOAP UI Java API
Protocols supported by SoapUI
How one should use SOAP UI
Download and install SOAP UI
Future of web services
Compare with Other Open Source Tools
Using Virtual User Generator
How to record a script in Vugen
Using Parameterization
How to perform Parameterization
Introduction to Load Runner Controller
How to design a scenario
What are monitors
How to configure a monitor
Using Load Generator
How to configure Load Generators
Using Ramp-up
Using Ramp-down
Executing a Scenario
Exercise and Assignments
How to perform Auto Correlation
How to perform Manual Correlation
Using dynamic value
Using Load Runner Analysis
How to create a Professional Report in Load Runner Analysis
Using Diagnostics
Using Performance Center
Exercise and Assignments
Introduction to Database
MySQL Database
Comparison with Popular Databases – Oracle, MS SQL Server, IBM DB2
Structured Query Language (SQL)
Data Definition Language (DDL)
Data Manipulation Language (DML)
Introduction to Tables, Rows, Columns
What are Foreign Key, Primary Key, and Unique Key?
What are DDL and DML
(DML) Select, Update, Delete and Insert Into statements
(DDL) Create, Alter, Drop statements
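The statements below sketch the DDL/DML split on a hypothetical `employees` table; the table and column names are made up for illustration:

```sql
-- DDL: define and change structure
CREATE TABLE employees (
    id   INT PRIMARY KEY,
    name VARCHAR(50),
    dept VARCHAR(30)
);
ALTER TABLE employees ADD salary DECIMAL(10, 2);

-- DML: work with the rows themselves
INSERT INTO employees (id, name, dept) VALUES (1, 'Asha', 'QA');
UPDATE employees SET dept = 'Testing' WHERE id = 1;
SELECT name, dept FROM employees WHERE dept = 'Testing';
DELETE FROM employees WHERE id = 1;

-- DDL again: remove the table
DROP TABLE employees;
```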
Exercise and Assignments
Requirements Module
Creating traceability between requirements and Tests
Test Plan Module
How to create Manual Test Cases
Test Lab Module
How to create Test Sets
Linking Test cases to Test Sets
Executing Test Cases
Creating Defects during Execution
Defects Module
How to create a defect
Reports with Document Generator
Most popular Test Reporting Tools
Jira – Test Reporting Tool for Agile Software Development and Testing
Bugzilla – Test Reporting Tool for Agile Software Development and Testing
Backlog Tracking with Bugzilla
Issues Management with Bugzilla
Bug Reporting and Tracking with Bugzilla
Self-Paced
Learn when and where it's convenient for you. Utilise the course's practical exposure through high-quality videos. Real-time instructors will guide you through the course from basic to advanced levels.
Online
Receive a live demonstration of each subject from our skilled faculty. Obtain LMS access following course completion. Acquire materials for certification.
Corporate
Attend classroom-mode training, or an online training lecture at your facility, from a subject matter expert. Learn for a full day with discussions, exercises, and real-world use cases. Create your curriculum based on the project requirements.
The support team was very punctual in sending the recordings. All these recordings enhanced my skills in Quality Analysis and helped me to clear the certification
- Amanuel T
The practical scenarios taken during the course enhanced my skills in Quality Analysis and made me a master in Quality Analysis. Thanks for providing the best training
- NAMIMOAK S
I recommend KITS as the best place to analyze and test data by interacting with real-time data and become a master in Quality Analysis
- Luckson M
It is the best place to enhance skills in Quality Analysis and become a master in analyzing the data. Thank you for providing the best training
- Rupak Kumar Naik
Interacting with real-time data during the course enhanced my skills in Quality Analysis. Thanks for providing the best training and making me a master in Quality Analysis
- Muhammad Talha Majoka
100% Online Course
Flexible Schedule
Beginner Level to Advanced Level
Real-Time Scenarios With Projects
LMS Access
Interview Questions & Resume Guidelines Access
Related

Ansible Training
Start increasing your team productivity and business outcomes through the KITS ansible online training course. This course will help you in developing the skills and knowledge required to automate the

Check Point Firewall Online Training
Get the best knowledge on creating the check point at various places of your network from beginner to the advanced level taught by real time working professionals at KITS Check Point Firewall Online

Chef Training
Get hands-on experience of configuration management technology to automate the infrastructure provisioning through hands-on exercises and projects by live experts with practical use cases at the chef

Django Online Training
KITS Django’s Online training Course, helps you to gain expertise in Django’s framework various concepts like Models, Ajax, JQuery, and so on by real-time experts. Besides you will master in Djang

Hibernate Online Training
Get practical Knowledge of configuring and deploying the application on hibernate platform conducted by KITS real-time experts with practical use cases as per the latest syllabus and become a master i

IBM Integration Bus Online Training
Get the strong foundation of creation, development, and deployment of message flow application using IBM integration Bus. Through this course, you will get hands-on experience of various messaging top

IBM Message Broker Online Training
Start IBM Message Broker Online Training course today and enhance your skills in developing, debugging, testing, and deploying the message models using the IBM Web Sphere tool by real-time industry ex

IBM Message Queue Online Training
Get the best knowledge on IBM WebSphere taught by real-time experts in the IT Industry. This training will help you to master all the levels of IBM WebSphere Message Queue from the basics to advance

Kubernetes Online Training
KITS Kubernetes Online Training Course, give you the practical knowledge on Kubernetes architecture, components, app development of Kubernetes cluster with live usecases through real-time experts.

Magento Online Training
Learn from the basics to the advanced level of content management systems(CMS) based on PHP and MySQL for web hosting. This Course provides the full support for Object-oriented programming as a part o

Micro services Online Training
Acquire practical exposure to architectural design principles and tools to manage and implement microservice-based applications through hands-on exercises and projects. This course will help to clea

Puppet Online Training
Start your career in automating the complex networks and IT environments using puppet code through hands-on exercises and projects by live experts with practical use cases at kits Puppet Online Traini

Python Django Online Training


Quality Stage Online Training
Acquire hands-on experience through real-time projects in Data Stage, ETL, data warehousing, and data working rest as well as in motion by live industry experts with practical use-cases at Quality Sta

Ruby on Rails Online Training
Get the best knowledge on designing and developing the front end and back end website solutions using Ruby on rails. Kits Ruby On rails online training course provides in-depth knowledge in the core f

Scala Online Training
Get the essentials of Scala programming from the roots taught by live industry experts with practical use cases at Scala Online Training Course. This course will enhance your knowledge of various aspe

Snaplogic Online Training
Start today to enhance your knowledge on learning different concepts of data integrations, cloud platforms easily and effectively through the real-world examples at kits Snap logic Online Training Cou

Spark Online Training
Kits Spark online training course lets you master the real-time data processing using streaming, SQL, RDD, machine learning and gain hands-on Scala programming through real-time exercises and projects

Struts Online Training
Get real-time knowledge on Model view and Controller architecture and structs framework and enhance your knowledge on application development through KITS Struts Online Training Course

Web Methods Online Training
KITS web methods training help you in mastering architecture, integration tools, components, advanced web services by live industry experts with live use cases. This course improves your skills and pr

WISE Package Studio Online Training
Acquire hands-on experience of a software management solution to support the need for application integration teams and become a master in software management with practical use-case through Wise Pack

What is Angular JS?
A website is a basic need for marketers to reach a large number of people. With the wide availability of content management systems, website development has become a cakewalk in the IT world. But we cannot expect the best website unless we build it on the best platform. So what makes the best website? A website is considered best when it is user-friendly: compatible with all devices and loading in optimal time across platforms. Developing a website is an ordinary thing; developing a user-friendly application is the challenging thing that is essential in today's world. So how do we develop an intuitive web application, and which platform suits it best? Without a second thought, most developers vote for AngularJS.

Before learning about AngularJS, let us first look at what a framework is. A framework is a collection of code libraries in which some functions are predefined. By utilizing a framework, developers can easily build lightweight applications and concentrate on the actual logic rather than struggling with code dependencies. In simple words, these predefined pieces of code make website development possible in a short period.

Now let's move to the actual concept: what is AngularJS? AngularJS is an open-source web application framework started in 2009 by Misko Hevery and Adam Abrons, and it is now maintained by Google. Its architecture follows the Model-View-Controller (MVC) pattern, similar to other JavaScript frameworks, and it suits best for developing single-page applications. It is a continuously growing and expanding framework that provides better ways of application development, and it is capable of turning static HTML into dynamic HTML.
Besides, it provides features like dynamic binding, dependency injection, and code rewriting. AngularJS is different from the later Angular framework, and it is capable of extending HTML attributes with directives. Since we now have a basic idea of AngularJS, let us look at its architecture. Get practical exposure to AngularJS with practical use cases at AngularJS Online Training.

AngularJS Architecture: As mentioned above, the Angular framework works on MVC architecture. An architecture is basically a pattern used to develop an application, and the AngularJS architecture consists of three components:

Model: It is responsible for managing the application data. It responds to instructions from the view and from the controller to update itself.

View: It is responsible for displaying the application data, presenting it in the format requested by the controller. Since it is a script-based template, like JSP, ASP, or PHP, it is easy to integrate with AJAX technology.

Controller: This component connects the model and the view. It responds to user inputs and performs interactions on the data-model objects. Whenever the controller receives input, it validates the input and performs the business operations that modify the state of the data model.

This architecture is very popular because it isolates the application logic from the user interface and supports separation of concerns. Usually with MVC we have to split an application into three components and then write the code to connect them; with AngularJS, all we need to do is split the application into MVC, and the framework takes care of the rest. Hence this framework saves a lot of time and lets us finish the job with less code.
What are the AngularJS components? AngularJS consists of several components; let us discuss some of them:

a) Data Binding: Data binding in AngularJS is a two-way process, i.e. the view layer of the MVC architecture is an exact copy of the model layer, so there is no need to write special code to bind the data to the HTML controls. In usual MVC architectures we need to continuously update the view layer and model layer to keep them in sync; in AngularJS the model and view layers synchronize themselves. Whenever the data in the model changes, the view reflects the change, and vice versa; this happens immediately and automatically, ensuring that model and view are up to date at all times.

b) Templates: One of the major features of this framework is its use of templates. In AngularJS, templates are parsed by the browser into the DOM, and the DOM then becomes the input to the AngularJS compiler. AngularJS traverses the DOM templates for rendering instructions called directives. The siblings of AngularJS work differently: they make use of HTML strings, whereas AngularJS does not manipulate template strings. Working with the DOM gives us the privilege of extending the directive vocabulary or even abstracting directives into reusable components.

c) Dependency Injection: It is a software design pattern based on inversion of control. Here "inversion of control" refers to objects that do not create the other objects they depend on; instead, they get those objects from an external source. The primary object does not create its dependent object; an external source creates it and gives it to the source object for further usage. On the basis of dependency injection we can gather the required information from a database and pass it into the model class. In AngularJS, dependencies are injected using an injectable factory method or a constructor function.
d) Scope: It is a built-in object in AngularJS that contains application data and models. The $scope object is responsible for transferring data from the controller to the view and vice versa. Besides, we can create properties on the $scope object inside the controller function and assign values to them.

e) Controller: A controller is a JavaScript constructor function, containing attributes/properties and functions, that is responsible for augmenting the AngularJS scope. Each controller accepts $scope as a parameter, which refers to the part of the application it needs to handle.

Likewise, there are many other components of AngularJS. You can acquire practical knowledge of AngularJS components from live experts with practical use cases through the AngularJS Online Course.

Final Words: With this, I hope you have a basic overview of AngularJS and its components. In upcoming posts on this blog, I'll be sharing the details of how the various AngularJS components work and their application in the IT industry. Meanwhile, have a glance at our AngularJS Interview Questions and crack the interview.
What is Ansible?
Since the utilization of computing resources has increased exponentially, doing multiple tasks in parallel has become difficult for humans, and companies today are not in a position to hire people for every need. Hence companies started looking for a solution to this problem and concluded that automation is the best option. Automation suits many areas of the IT industry, and one such area is server management. Server management has become a tedious task for system administrators, so they opt for a server management tool like Ansible; today most SysOps people choose this tool for the job. Do you know the exciting features of this tool? If not, read this article to get answers to these questions. Without wasting much time, let us jump into the actual topic.

What is Ansible? It is a simple software tool that provides powerful automation with cross-platform support. The tool is primarily used for application deployment, updating workstations and servers, cloud provisioning, configuration management, intra-service orchestration, and so on. In simple words, this system is capable of doing all the activities you would otherwise repeat on a weekly or monthly basis. Moreover, this automation tool does not depend on agent software and does not add its own security infrastructure, so the user can easily deploy applications on this platform.

Besides the automation itself, this platform requires instructions to accomplish each job. Since everything is written in a script format, it is easy to put under version control. The principal result is a major contribution to "Infrastructure as Code": the idea that the maintenance of server and client infrastructure should be treated the same as software development. Since Ansible can be used by front-end developers, system administrators, and DevOps engineers, this platform is useful for all users.
Besides, this platform not only allows you to configure one computer but lets you configure a network of computers at once, and it does not require prior knowledge of a programming language: all instructions are written in a simple human-readable format. Automation simplifies the most complex tasks. It not only makes the developer's job manageable but also allows them to focus their attention on other tasks, freeing up time and increasing efficiency. According to recent stats, this tool stands at the top of the automation tools. Get more information on Ansible, taught by industry experts, at Ansible Online Training.

How does Ansible work? Ansible has been designed for multi-tier deployments since day one: it models the whole IT infrastructure rather than managing one system at a time. It does not use any agents or custom security infrastructure, so it is easy to deploy and manage, and it uses a simple language called YAML to describe automation jobs. The tool involves two categories of computers: the control node and the managed nodes. The control node is a computer that runs Ansible; there should be at least one control node, even if a backup control node exists. A managed node is any machine managed by the control node. Ansible works by connecting to the nodes on the network and sending a small program called an Ansible module to each node; it executes these modules over SSH and removes them when finished. The main requirement is that the Ansible control node has login access to the managed nodes. SSH keys are the most common way to provide that access, but other forms of authentication are also supported.

What does Ansible do? The term Ansible sounds complex, but most of the complexity is handled by Ansible itself, not the end-user. An Ansible module is written to be the desired model of the system,
i.e., each module defines the desired state of some part of the system, and maintaining the infrastructure becomes a matter of checking things such as the versions of installed software. Hence, when people talk about using Ansible, they are usually referring to using Ansible modules, because those are the parts of Ansible that perform specific tasks. These modules are responsible for automating something across several computers, and Ansible allows programmers to write custom modules to perform specialized tasks.

What are the advantages of Ansible? The Ansible tool has the following advantages:

Simple: It uses a simple syntax written in YAML, called playbooks. YAML is a human-readable data serialization language, so even non-programmers can read a playbook and understand what is happening.

Independent: The user does not need to install any specific software on the client/host systems being automated, and there is no management infrastructure to set up for your systems, network, or storage.

Powerful and Flexible: It has powerful features that let you model the most complex workflows; Ansible's batteries-included modules can manage infrastructure, networks, operating systems, and services.

Efficient: Since Ansible modules communicate via JSON, Ansible is extensible with modules written in any programming language you are already familiar with; it introduces modules as the basic building blocks for your software.

Likewise, there are many more uses of Ansible that you discover when you work with the tool practically. By reaching the end of this blog, I hope you have acquired enough knowledge about Ansible and its application in real-time projects. You can get hands-on experience with Ansible from beginner to advanced level at the Ansible Online Course.
What is Chef?
Chef is a configuration management tool that competes closely with Puppet. In this article I'm going to share the complete details of why it is used, where it is applied, and its advantages in the IT world. So let us start our discussion with:

Why Chef? Software keeps updating over time, and to utilize the new features of any software we need to update its version. Doing so is a simple task on one system, but updating multiple systems (say, across an organization) becomes tedious and time-consuming. To get rid of this repetitive work, we need a tool like Chef to automate configuration management.

What is Chef? Chef is an automation tool that provides a way to define Infrastructure as Code (IaC). Infrastructure as Code refers to managing infrastructure by writing code rather than through manual processes; some people refer to this as programmable infrastructure. Chef uses a pure-Ruby domain-specific language for writing system configurations. Chef is capable of spinning up hundreds of instances in less than a minute, and it enables programmers and system administrators to work together instead of the ops team waiting for developers to write code before deploying it; this configuration management tool serves the processes of both the development and the ops teams.

Chef translates system administration tasks into reusable definitions known as cookbooks and recipes. In these recipes, authors define the desired state of the system by writing configuration code. Chef then processes that code, along with data about the specific node where the code is running, and ensures that the actual state of the system matches the desired state.
Are you looking to acquire practical knowledge of Chef? Then visit Chef Online Training.

Irrespective of the infrastructure size, Chef can perform the following automation: infrastructure configuration, application deployment, and configuration across the network.

Like Puppet, Chef uses a client-server architecture; besides, it contains an extra component called a workstation. Through Chef we can easily use both pull and push configuration:

Pull configuration: The nodes poll a centralized server periodically for updates. Since the nodes are configured dynamically, they pull the configurations from the centralized server.

Push configuration: The centralized server pushes all configurations to the nodes, using commands to configure them.

Chef Architecture: The Chef architecture is divided into three components:

1. Workstation: The workstation is, in the simplest terms, the admin system. It makes interaction with the chef-server and the chef-nodes possible. It is the place where all cookbooks are created and tested and where cookbook deployments take place, and we can use the workstation to download cookbooks created by other users. While working with Chef, we also need to understand the following terms:

Development Kit: It contains all the packages required to use Chef.
Chef-Repo: A directory on the workstation where all cookbooks are kept and maintained.
Knife: This command-line tool lets the workstation communicate the contents of its chef-repo directory to the server.
Test Kitchen: It provides a development environment on the workstation for creating and testing cookbooks before they are distributed.

2. Chef Server: It is the center between workstations and nodes, and it contains all the cookbooks, recipes, and metadata.
The workstations upload cookbooks to the server using Knife, and the nodes communicate with the server using the chef-client. If any changes are made to the infrastructure code, they must be passed to the chef server to be applied to all nodes.

3. Nodes: These are the machines managed or configured by the chef-server; they may be virtual servers, network devices, or other storage devices. The chef-client keeps each node up to date, running on each node individually to configure it.

What are the salient features of Chef? The Chef tool helps speed up the deployment process and software delivery. Being a DevOps tool, it helps streamline configuration tasks and manage the company's servers. The following are the salient features of Chef:

We can manage a large number of servers with fewer employees.
It allows continuity in the deployment process, from building through testing until the end.
Chef can be used on different operating systems, such as Linux and Windows.
It can be integrated with several major cloud service providers.
It also helps in managing risk at all stages of deployment.

What are the advantages of Chef? Utilization of Chef has the following advantages:

Accelerating software delivery: When your infrastructure is automated, including requirements like the creation and testing of new environments, software deployments become faster.
Increased service resiliency: With automated infrastructure, we can monitor for bugs and errors before they occur and recover from errors more quickly.
Risk management: It lowers risk and improves compliance at all stages, and it reduces conflicts between the development and production environments.
Cloud adoption: Chef can be easily adapted to a cloud environment; servers and infrastructure can be easily configured, installed, and managed by Chef.
Managing data centers and cloud environments: The Chef platform is capable of running under different environments; it can run on all cloud and on-premise platforms, including servers.
Streamlined IT operations and workflow: It provides a pipeline for continuous deployment, from building through testing.

What are the disadvantages of Chef? Utilization of Chef has the following disadvantages:

Learning Chef involves a steep learning curve.
The initial setup is quite complicated.
It lacks push, so changes do not take immediate effect; they take effect as per the schedule.

Final Words: Everything in this world has some drawbacks, so keep the cons aside and make use of Chef's advantages for the efficient running of your project. By reaching the end of this blog, I hope you have gained a good understanding of Chef, the need for it, and its utilization in the IT industry. You can get practical knowledge of Chef configuration management at the Chef Online Course.
What is Dot Net?
Application development has become more common in today's world, because people can now easily develop applications using different frameworks. A framework is essential for the smooth running of an application: it makes application development simpler and faster. There are many frameworks like Dot Net for the smooth running of applications. Even though there are multiple frameworks for developing intuitive applications, the importance of Dot Net has not decreased in the market. Have you ever thought about why Dot Net has remained so prominent in the market? Read the complete article to know the details. What is Dot Net? The Dot Net framework is a Microsoft software development framework, responsible for creating applications that run on the Windows platform. The initial version of this framework was released in 2002. It suits form-based as well as web-based applications, and it supports various languages such as Visual Basic and C#, so developers can choose the language in which to develop their application. Dot Net is central to Microsoft's overarching development strategy and to the development of the Windows platform. The framework contains a large collection of class libraries known as the Framework Class Library. Programs written for the framework execute in an environment known as the Common Language Runtime (CLR). This programming model provides a comprehensive software infrastructure and the various services necessary to build robust applications for PCs as well as mobile devices. Get more features of .Net from live experts at Dot Net Online Training Asp .Net: ASP .Net is a part of the Microsoft .Net platform. The ASP .Net framework works on top of the HTTP protocol and uses HTTP commands and policies to set up bilateral browser-to-server communication.
These applications are compiled code written using the extensible, reusable components of the Dot Net framework. ASP .Net is responsible for producing interactive, data-driven applications over the internet. It contains a large number of controls, such as text boxes, buttons, and labels, for assembling, configuring, and manipulating the code that creates HTML pages. ASP .Net offers two models. They are: a) Web Forms Model: This model extends the event-driven model of interaction to web applications. The browser submits a web form to the web server, and the server returns the full markup (HTML) page in response. All client-side user activity is forwarded to the server for stateful processing; the server processes the output of client actions and triggers the appropriate reactions. Since HTTP is a stateless protocol, the ASP .Net framework helps store information about the application state. It consists of page state and session state. The page state is the state of the client. The session state is the collective information obtained from the various pages the user visited and worked with during the overall session. b) ASP Dot Net Component Model: This model provides the various building blocks of ASP .Net pages. It is an object-oriented model that describes the server-side counterparts of all the HTML elements (tags), along with server controls that help in developing complex user interfaces. The Dot Net framework is organized as an object-oriented hierarchy. Usually, an ASP .Net web application is a combination of multiple web pages. When a user requests an ASP .Net page, IIS delegates the page processing to the ASP .Net system.
The ASP .Net runtime transforms the .aspx page into an instance of a class that inherits from a base class of the Dot Net framework. Dot Net Framework Components: The framework is responsible for various services such as memory management, networking, and type safety. It mainly consists of four components: a) Common Language Runtime (CLR): A program execution engine that loads and executes programs and converts them into native code. It acts as an interface between the framework and the operating system, performing activities such as exception handling, memory management, and garbage collection, and providing type safety, interoperability, and portability. b) Framework Class Library (FCL): A standard library, a collection of classes used to build applications. The Base Class Library (BCL) is the core of the FCL and provides the basic functionality. c) Core technologies: Dot Net supports various core technologies. Some of them are: 1) WinForms: A smart-client technology for the Dot Net framework; a set of managed libraries that simplify common application tasks such as reading and writing the file system. 2) ASP .Net: A web framework designed and developed by Microsoft for building websites, web applications, and web services, with excellent integration of HTML, CSS, and JavaScript. d) Other modules: 1) LINQ: A query language introduced in Dot Net 3.5, with which users can query data sources using C# or the Visual Basic programming languages. 2) Parallel LINQ: A parallel implementation of LINQ to Objects. It combines the simplicity and readability of LINQ with the power of parallel computing, and it can speed up the execution of LINQ queries by using all available processor cores.
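As a rough cross-language illustration of the LINQ idea (Python is used here purely as an analogy; this is not the .Net API), a declarative filter-and-project query looks like:

```python
numbers = list(range(1, 11))

# LINQ (C#):  from n in numbers where n % 2 == 0 select n * n
# Python analogue: a comprehension filters and projects in one declarative step
squares_of_evens = [n * n for n in numbers if n % 2 == 0]
print(squares_of_evens)  # [4, 16, 36, 64, 100]

# LINQ also defers execution until results are enumerated;
# a generator expression behaves the same lazy way
lazy = (n * n for n in numbers if n % 2 == 0)
print(next(lazy))  # 4 -- computed only when requested
```

The point of both styles is the same: you state *what* data you want, and the runtime decides *how* to produce it, which is what lets Parallel LINQ distribute the work across cores without changing the query.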
In addition to the above-mentioned features, Dot Net includes other APIs and models that improve and enhance the framework. What does a Dot Net Developer do? A Dot Net developer is responsible for designing, tailoring, and developing software applications according to the business needs. In addition to determining and analyzing the prerequisites for the software, the developer is also responsible for support as well as continuous development. These are the basic responsibilities of a Dot Net developer; they may vary from company to company, and in some cases from project to project. This is a basic overview of the Dot Net platform. I hope you have got an overview of .Net and its components. You can get hands-on experience on the .Net framework from live industry experts with live use cases at Dot Net Online Course. In the upcoming posts of this blog, I'll share the details of the Dot Net components in more depth. Meanwhile, have a glance at our Dot Net interview questions and get placed in your dream firm
What is Power Shell?
In today's world, there are several ways to interact with and manage a computer operating system. Some of them are green-screen terminals, the command-line interface, and graphical user interfaces. Besides, there are other methods such as application programming interface (API) calls and web-based management calls. Among these, the command-line interface is capable of performing repetitive tasks quickly and accurately when managing a large number of systems. Hence, Microsoft introduced shell scripting to meet the needs of users and ensure that each task is done in the same manner. This article gives you a brief explanation of PowerShell, regarding its need and application in the real-time IT industry. What is PowerShell? PowerShell is a Microsoft scripting and automation platform. It is both a scripting language and a command-line interface, built on the .Net framework. It uses small programs called cmdlets. The platform is responsible for the configuration, administration, and management of heterogeneous environments, in both standalone and networked topologies, utilizing standard remoting protocols. Once you start working with PowerShell, it provides a set of opportunities for simplifying tasks and saving time. It does this using a command-line shell and an associated scripting language. At the time of its release, this powerful tool essentially replaced the command prompt for automating batch processes and creating customized system management tools. Today many operations teams, such as system administrators, rely on the 130+ command-line tools within PowerShell to streamline and scale tasks on both local and remote systems. Do you want expertise on this tool? Then visit Power Shell Online Training. Why should you use PowerShell?
PowerShell is a popular tool for many MSPs because its scalability helps simplify management tasks and generate insights into devices across medium- and large-scale estates. Through PowerShell, you can transform your workflow to: Automate time-consuming tasks: With cmdlets, you don't have to perform the same task again and again, or spend time on manual configuration. For instance, you can use cmdlets like Get-Command to search for other cmdlets, Get-Help to discover the syntax of a cmdlet, and Invoke-Command to run a script locally, remotely, or even in batch. Provide network-wide workarounds: PowerShell enables you to get around software or program limitations, especially on a business-wide scale. For example, PowerShell can reconfigure the default settings of a program across the entire network. This might be useful if the business wants to roll out a specific protocol to all its users, such as two-factor authentication (2FA), or have them change their passwords every month. Scale your efforts across devices: PowerShell can be a lifesaver if you want to run scripts across multiple computers, especially if some of them are remote devices. For instance, if you are trying to implement a solution on a few devices or servers at once, you don't have to log in to each server separately. Moreover, PowerShell can gather information across multiple devices at once and allows you to install updates, configure settings, and collect data, saving you hours of work and travel time. Gain visibility into information: A key advantage of this platform is its access to the computer's file system. PowerShell makes it easy to find data in files and the Windows registry, and digital certificates are visible whether housed on one computer or many. It also allows you to export the data for reporting purposes.
What can you do with PowerShell? GUIs are a form of wrapper responsible for running code for certain actions, such as clicking buttons; the underlying code needs to be written for the GUI to function. Using PowerShell code, companies can roll out changes and updates and can test the GUI. PowerShell is also tightly integrated with most Microsoft products. In some products, such as Windows Server 2016 and Office 365, certain things cannot be done with the GUI and only PowerShell can do them. Microsoft has designed this tool as open source and cross-platform, and has incorporated its capabilities into several interfaces. PowerShell has become a robust solution for automating a range of tedious or administrative tasks, and for finding, filtering, and exporting information about the computers on a network. It does this by combining commands, called cmdlets, into scripts. For IT professionals such as MSPs, it makes sense to use text-based command-line interfaces (CLIs) to achieve more granular control over system management. Within PowerShell, you can leverage improved access and control over Windows Management Instrumentation and the Component Object Model to fine-tune administrative management. This automation tool is greatly helpful for executing typical management tasks, including adding and deleting accounts, editing groups, and creating lists to view specific types of users or groups. Besides, this powerful tool has an Integrated Scripting Environment (ISE), a graphical user interface that lets you run commands and create or test scripts. This interface lets you develop scripts as command collections, to which you can add the logic for execution. This is particularly useful for system administrators who need to run command sequences for system configuration.
Likewise, there are multiple uses of PowerShell in the real-time industry. By reaching the end of this article, I hope you have gained a good knowledge of PowerShell. You can get more practical knowledge of PowerShell, taught by real-time experts, at power shell online Course. In the upcoming articles of this blog, I'll share more information on PowerShell.
Active Directory Interview Questions
Q.Define what is Active Directory? Answer: Active Directory is a metadata store: a database that stores information such as your user information, computer information, and other network object information. It has the capability to manage and administer the complete network connected to AD. Q.What's the difference between local, global and universal groups? Answer: Domain local groups assign access permissions to global domain groups for local domain resources. Global groups provide access to resources in other trusted domains. Universal groups grant access to resources in all trusted domains. Q.I am trying to create a new universal user group. Why can't I? Answer: Universal groups are allowed only in native-mode Windows Server 2003 environments. Native mode requires that all domain controllers be promoted to Windows Server 2003 Active Directory. Q.What is an IP address? Answer: Every device connected to the public Internet is assigned a unique number known as an Internet Protocol (IP) address. IPv4 addresses consist of four numbers separated by periods (also called a 'dotted quad') and look something like 127.0.0.1. In computer networking, an Internet Protocol (IP) address is a numerical identification (logical address) that network management assigns to devices participating in a computer network that uses the Internet Protocol for communication between its nodes. Although computers store IP addresses as binary numbers, they often display them in more human-readable notations, such as 192.168.100.1 (for IPv4) and 2001:db8:0:1234:0:567:1:1 (for IPv6). The role of the IP address has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there." Q.What is a subnet mask? Answer: A subnet (short for "subnetwork") is an identifiably separate part of an organization's network, and the subnet mask is the bit pattern that separates the network portion of an address from the host portion.
Typically, a subnet may represent all the machines at one geographic location, in one building, or on the same local area network (LAN). Having an organization's network divided into subnets allows it to be connected to the Internet with a single shared network address. Without subnets, an organization could get multiple connections to the Internet, one for each of its physically separate subnetworks, but this would consume an unnecessary share of the limited number of network numbers the Internet has to assign. It would also require that Internet routing tables on gateways outside the organization know about and manage routing that could and should be handled within the organization. Q.What is ARP? What is ARP cache poisoning? Answer: Address Resolution Protocol (ARP) is a protocol for mapping an Internet Protocol address (IP address) to a physical machine address that is recognized in the local network. For example, in IP version 4, the most common level of IP in use today, an address is 32 bits long. In an Ethernet local area network, however, addresses for attached devices are 48 bits long. (The physical machine address is also known as a Media Access Control, or MAC, address.) A table, usually called the ARP cache, is used to maintain a correlation between each MAC address and its corresponding IP address. ARP provides the protocol rules for making this correlation and providing address conversion in both directions. ARP cache poisoning is an attack in which forged ARP replies associate the attacker's MAC address with another host's IP address, so that traffic intended for that host is delivered to the attacker instead. Q.How does ARP work? Answer: When an incoming packet destined for a host machine on a particular local area network arrives at a gateway, the gateway asks the ARP program to find a physical host or MAC address that matches the IP address. The ARP program looks in the ARP cache and, if it finds the address, provides it so that the packet can be converted to the right packet length and format and sent to the machine.
If no entry is found for the IP address, ARP broadcasts a request packet in a special format to all the machines on the LAN to ask whether any machine knows that it has that IP address associated with it. A machine that recognizes the IP address as its own returns a reply saying so. ARP updates the ARP cache for future reference and then sends the packet to the MAC address that replied. Q.Define what is Active Directory Domain Services? Answer: In Windows 2000 Server and Windows Server 2003, the directory service is named Active Directory. In Windows Server 2008 and Windows Server 2008 R2, the directory service is named Active Directory Domain Services (AD DS). The rest of this topic refers to AD DS, but the information is also applicable to Active Directory. Q.Define what is a domain? Answer: A domain is a set of network resources (applications, printers, and so forth) for a group of users. The user need only log in to the domain to gain access to the resources, which may be located on a number of different servers in the network. (A Windows domain in this sense is not to be confused with an Internet domain name or URL.) Q.Define what is a domain controller? Answer: A domain controller (DC) is a server that responds to security authentication requests (logging in, checking permissions, etc.) within the Windows Server domain. A domain is a concept introduced in Windows NT whereby a user may be granted access to a number of computer resources with the use of a single username and password combination. Q.What is a default gateway? What happens if I don't have one? Answer: A gateway is a routing device that knows how to pass traffic between different subnets and networks. A computer will know some routes (a route is the address of each node a packet must go through on the Internet to reach a specific destination), but not the routes to every address on the Internet. It won't even know all the routes on the nearest subnets.
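The cache-first, broadcast-on-miss behaviour from the ARP answer above can be modeled in a few lines (a toy Python sketch, not a real ARP implementation; the addresses and table contents are invented):

```python
arp_cache = {"192.168.1.1": "00:1a:2b:3c:4d:5e"}  # IP -> MAC, as learned so far

def broadcast_arp_request(ip):
    # Stand-in for the LAN broadcast; here only one other host "replies".
    lan_hosts = {"192.168.1.20": "aa:bb:cc:dd:ee:ff"}
    return lan_hosts.get(ip)  # None if no machine claims the address

def resolve(ip, cache):
    """Return the MAC for ip, consulting the cache first (like the ARP program)."""
    if ip in cache:
        return cache[ip]              # cache hit: no broadcast needed
    mac = broadcast_arp_request(ip)   # cache miss: ask every host on the LAN
    if mac is not None:
        cache[ip] = mac               # remember the reply for future packets
    return mac

print(resolve("192.168.1.20", arp_cache))  # learned via the "broadcast"
```

ARP cache poisoning corresponds to an attacker inserting a forged entry into that table, which is why the reply that updates the cache is the security-sensitive step.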
A gateway will not have this information either, but will at least know the addresses of other gateways it can hand the traffic off to. Your default gateway is on the same subnet as your computer, and is the gateway your computer relies on when it doesn't know how to route traffic. The default gateway is typically very similar to your IP address, in that many of the numbers may be the same; however, the default gateway is not your IP address. Without a default gateway, your computer can reach only hosts on its own subnet. To see what default gateway you are using, follow the steps below for your operating system. Q.What is a subnet? Answer: In computer networks based on the Internet Protocol Suite, a subnetwork, or subnet, is a portion of the network's computers and network devices that have a common, designated IP address routing prefix (cf. Classless Inter-Domain Routing, CIDR). A routing prefix is the sequence of leading bits of an IP address that precede the portion of the address used as the host identifier (or "rest field" in early Internet terminology). Q.What is APIPA (Automatic Private IP Addressing)? Answer: Windows 98, 98 SE, Me, and 2000 have an Automatic Private IP Addressing (APIPA) feature that automatically assigns an Internet Protocol address to a computer on which it is installed. This occurs when the TCP/IP protocol is installed and set to obtain its IP address automatically from a Dynamic Host Configuration Protocol server, and there is no DHCP server present or the DHCP server is not available. The Internet Assigned Numbers Authority (IANA) has reserved the range 169.254.0.0 - 169.254.255.255 for Automatic Private IP Addressing. Q.What is an RFC? What is RFC 1918? Answer: An RFC (Request for Comments) is a formal document from the Internet engineering community describing specifications, protocols, and procedures for the Internet. RFC 1918, "Address Allocation for Private Internets" (February 1996), reserves three address blocks (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) for use on private networks; these addresses are not routed on the public Internet, which conserves the limited supply of globally unique addresses.
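The reserved ranges above, and CIDR arithmetic generally, are easy to check with Python's standard ipaddress module (Python is used here only as a neutral calculator; the sample addresses are illustrative):

```python
import ipaddress

# APIPA addresses fall in the link-local block 169.254.0.0/16
apipa = ipaddress.ip_address("169.254.10.20")
print(apipa.is_link_local)   # True

# RFC 1918 private blocks: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
for addr in ("10.1.2.3", "172.16.0.1", "192.168.100.1", "8.8.8.8"):
    print(addr, ipaddress.ip_address(addr).is_private)
# the first three are private; 8.8.8.8 is not

# CIDR arithmetic: usable hosts of a /28 block
# (the first and last addresses are the network and broadcast addresses)
net = ipaddress.ip_network("192.168.1.0/28")
hosts = list(net.hosts())
print(hosts[0], "-", hosts[-1], f"({len(hosts)} usable hosts)")
```

The same `.hosts()` and `.subnets()` calls answer the subnetting exercises that follow, without doing the binary arithmetic by hand.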
Q.What is CIDR? Answer: Short for Classless Inter-Domain Routing, an IP addressing scheme that replaces the older system based on classes A, B, and C. With CIDR, a single IP address prefix can be used to designate many unique IP addresses. A CIDR address looks like a normal IP address except that it ends with a slash followed by a number, called the IP network prefix, for example: 172.200.0.0/16. The IP network prefix specifies how many addresses are covered by the CIDR address, with lower numbers covering more addresses. A /12 prefix, for example, covers 2^(32-12) = 1,048,576 addresses, the equivalent of 4,096 former Class C networks. CIDR addresses reduce the size of routing tables and make more IP addresses available within organizations. CIDR is also called supernetting. Q.You have the following network ID: 192.115.103.64/27. What is the IP range for your network? Answer: 192.115.103.65 to 192.115.103.94 (the /27 block runs from .64 to .95; the first address is the network address and the last is the broadcast address). Q.You have the following network ID: 131.112.0.0. You need at least 500 hosts per network. How many networks can you create? What subnet mask will you use? Answer: 500 hosts require 9 host bits (2^9 - 2 = 510 usable addresses), which leaves a /23 prefix, so the subnet mask is 255.255.254.0 and you can create 128 networks. Q.You need to view network traffic. What will you use? Name a few tools. Answer: A network traffic monitoring (packet capture) tool, such as Wireshark, tcpdump, or Microsoft Network Monitor. Q.How do I know the path that a packet takes to the destination? Answer: Use the "tracert" command-line utility (traceroute on Unix-like systems). Q.What does the ping 192.168.0.1 -l 1000 -n 100 command do? Answer: The ping command sends round-trip packets to a destination (another PC, router, printer, etc.) and measures how long they take. 192.168.0.1 is the destination (which, by the way, is a typical default IP address of a router). The -l 1000 option sets how big each packet should be, in bytes; the default is 32 if -l is not used. The -n 100 option says to send it 100 times; the default is 4 when this parameter is not used. Q.What is DHCP?
What are the benefits and drawbacks of using it? Answer: DHCP is the Dynamic Host Configuration Protocol. In a networked environment it is a method to assign an address to a computer when it boots up. Benefit: a system administrator need not worry about computers being able to access networked resources. Q.Benefits of using DHCP Answer: DHCP provides the following benefits for administering your TCP/IP-based network: Safe and reliable configuration: DHCP avoids configuration errors caused by the need to manually type in values at each computer. Also, DHCP helps prevent address conflicts caused by a previously assigned IP address being reused to configure a new computer on the network. Reduced configuration management: Using DHCP servers can greatly decrease the time spent configuring and reconfiguring computers on your network. Servers can be configured to supply a full range of additional configuration values when assigning address leases; these values are assigned using DHCP options. Also, the DHCP lease renewal process helps ensure that where client configurations need to be updated often (such as for users with mobile or portable computers who change locations frequently), these changes can be made efficiently and automatically by clients communicating directly with DHCP servers. Q.Describe the steps taken by the client and DHCP server in order to obtain an IP address. Answer: DHCP uses a client-server model. The network administrator establishes one or more DHCP servers that maintain TCP/IP configuration information and provide it to clients. The server database includes the following: valid configuration parameters for all clients on the network; valid IP addresses maintained in a pool for assignment to clients, plus reserved addresses for manual assignment; and the duration of a lease offered by the server. The lease defines the length of time for which the assigned IP address can be used.
With a DHCP server installed and configured on your network, DHCP-enabled clients can obtain their IP address and related configuration parameters dynamically each time they start and join the network. The exchange follows four steps, often abbreviated DORA: the client broadcasts a DHCPDISCOVER, the server responds with a DHCPOFFER containing an address-lease offer, the client replies with a DHCPREQUEST for the offered address, and the server confirms with a DHCPACK. Q.What is a DHCPNAK and when do I get one? Name 2 scenarios. What does DHCPNAK stand for? Answer: DHCP (Dynamic Host Configuration Protocol) Negative Acknowledgment. A server sends one, for example, when a client requests an address that is not valid for the subnet it is now on (such as after moving to a different network), or when the requested lease has expired and the address has been assigned to another client. Q.What ports are used by DHCP and the DHCP clients? Answer: Client requests are sent to UDP port 67 on the server; the server replies to UDP port 68 on the client. Q.Describe the process of installing a DHCP server in an AD infrastructure. Answer: Open the Windows Components Wizard. Under Components, scroll to and click Networking Services, then click Details. Under Subcomponents of Networking Services, click Dynamic Host Configuration Protocol (DHCP), and then click OK. Click Next. If prompted, type the full path to the Windows Server 2003 distribution files, and then click Next. The required files are copied to your hard disk. To authorize a DHCP server in Active Directory: open DHCP; in the console tree, click DHCP; on the Action menu, click Manage authorized servers (the Manage Authorized Servers dialog box appears); click Authorize; when prompted, type the name or IP address of the DHCP server to be authorized, and then click OK. Q.What is DHCPINFORM? Answer: DHCPInform is a DHCP message used by DHCP clients to obtain DHCP options. While PPP remote access clients do not use DHCP to obtain IP addresses for the remote access connection, Windows 2000 and Windows 98 remote access clients use the DHCPInform message to obtain DNS server IP addresses, WINS server IP addresses, and a DNS domain name. The DHCPInform message is sent after the IPCP negotiation is complete. The DHCPInform message received by the remote access server is then forwarded to a DHCP server.
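The lease flow described in the answers above can be sketched as a toy exchange (a Python model with an invented address pool, not a real DHCP implementation): DISCOVER finds a server, OFFER proposes an address from the pool, REQUEST claims it, and the server answers with an ACK, or a NAK when the address is no longer valid for this scope.

```python
class TinyDhcpServer:
    """Toy model of the DISCOVER/OFFER/REQUEST/ACK (DORA) exchange."""

    def __init__(self, pool):
        self.pool = list(pool)   # addresses still available for lease
        self.leases = {}         # client MAC -> leased IP

    def handle_discover(self, mac):
        if mac in self.leases:                        # renewing clients are
            return self.leases[mac]                   # offered their old address
        return self.pool[0] if self.pool else None    # OFFER (None: pool exhausted)

    def handle_request(self, mac, ip):
        if ip in self.pool:
            self.pool.remove(ip)
            self.leases[mac] = ip
            return "DHCPACK"
        if self.leases.get(mac) == ip:
            return "DHCPACK"                          # renewal of an existing lease
        return "DHCPNAK"                              # address not valid here

server = TinyDhcpServer(["192.168.0.10", "192.168.0.11"])
offer = server.handle_discover("aa:bb:cc:dd:ee:ff")
print(server.handle_request("aa:bb:cc:dd:ee:ff", offer))  # DHCPACK
```

The model also shows where a DHCPNAK comes from: a REQUEST for an address the server neither holds in its pool nor has leased to that client.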
The remote access server forwards DHCPInform messages only if it has been configured with the DHCP Relay Agent. Q.Describe the integration between DHCP and DNS. Answer: Traditionally, DNS and DHCP servers have been configured and managed one at a time. Similarly, changing authorization rights for a particular user on a group of devices has meant visiting each one and making configuration changes. DHCP integration with DNS allows the aggregation of these tasks across devices, enabling a company's network services to scale in step with the growth of network users, devices, and policies, while reducing administrative operations and costs. This integration provides practical operational efficiencies that lower the total cost of ownership. Creating a DHCP network automatically creates an associated DNS zone, for example, reducing the number of tasks required of network administrators. And integration of DNS and DHCP in the same database instance provides unmatched consistency between service and management views of IP-address-centric network services data. Q.What is the BOOTP protocol used for, and where might you find it in a Windows network infrastructure? Answer: In computing, the Bootstrap Protocol, or BOOTP, is a UDP network protocol used by a network client to obtain its IP address automatically. This is usually done during the bootstrap process when a computer is starting up. BOOTP servers assign the IP address to each client from a pool of addresses. You can also find the Bootstrap Protocol in DHCP pool configuration on Cisco switches and routers. Q.DNS zones - describe the differences between the 3 types. Answer: DNS stands for Domain Name System. A DNS server resolves a name to an IP address, as stated in an earlier answer, but it can also point to multiple IP addresses for load balancing, or for backup servers if one or more is offline or not accepting connections. Individual organizations may have their own DNS servers for their local intranet.
Some sites have their own DNS server to switch between subdomains within them. For example, a site such as Blogspot can have subdomains come and go quite frequently. Rather than force every DNS server to update its own database whenever someone creates a new blog, Blogspot could maintain its own DNS server to resolve names within the blogspot.com domain, e.g., to distinguish between myblog.blogspot.com and yourblog.blogspot.com: their DNS server would be queried once blogspot.com is resolved, and it would be responsible for resolving myblog vs. yourblog. The following are the three main components of DNS: Domain name space and associated resource records (RRs): a distributed database of name-related information. DNS name servers: servers that hold the domain name space and RRs, and that answer queries from DNS clients. DNS resolvers: the facility within a DNS client that contacts DNS name servers and issues name queries to obtain resource record information. DNS zones: A DNS server that has complete information for part of the DNS name space is said to be the authority for that part of the name space. This authoritative information is organized into units called zones, which are the main units of replication in DNS. A zone contains one or more RRs for one or more related DNS domains. The following are the three DNS zone types implemented in Windows 2000: Standard primary: holds the master copy of a zone and can replicate it to secondary zones. All changes to a zone are made on the standard primary. Standard secondary: contains a read-only copy of zone information that can provide increased performance and resilience. Information in a primary zone is replicated to the secondary by use of the zone transfer mechanism. Active Directory-integrated: a Microsoft proprietary zone type, where the zone information is held in the Windows 2000 Active Directory (AD) and replicated using AD replication. Q.DNS record types - describe the most important ones.
What are DNS resource records? Answer: An RR is information related to a DNS domain; for example, the host record defining a host IP address. Each RR contains a common set of fields, as follows: Owner: indicates the DNS domain in which the resource record is found. TTL: the length of time used by other DNS servers to determine how long to cache information for a record before discarding it. For most RRs, this field is optional. The TTL value is measured in seconds, with a TTL value of 0 indicating that the RR contains volatile data that is not to be cached. As an example, SOA records have a default TTL of 1 hour. This prevents these records from being cached by other DNS servers for a longer period, which would delay the propagation of changes. Class: for most RRs, this field is optional. Where it is used, it contains standard mnemonic text indicating the class of the RR. For example, a class setting of IN indicates the record belongs to the Internet (IN) class. At one time there were multiple classes (such as CH for ChaosNet), but today only the IN class is used. Type: this required field holds standard mnemonic text indicating the type of the RR. For example, a mnemonic of A indicates that the RR stores host address information. Record-specific data: a variable-length field containing information describing the resource. This information's format varies according to the type and class of the RR. Q.Describe the process of working with an external domain name. Answer: If it is not possible for you to configure your internal domain as a subdomain of your external domain, use a stand-alone internal domain. This way, your internal and external domain names are unrelated. For example, an organization that uses the domain name contoso.com for its external namespace uses the name corp.internal for its internal namespace. The advantage of this approach is that it provides you with a unique internal domain name.
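The TTL field's caching role described above can be illustrated with a toy resolver cache (a Python sketch, not a real DNS server; the record values are invented):

```python
import time

class RecordCache:
    """Caches RR data until its TTL (in seconds) expires, as a DNS server would."""

    def __init__(self):
        self.entries = {}   # name -> (value, expiry timestamp)

    def put(self, name, value, ttl):
        if ttl > 0:                          # a TTL of 0 means "do not cache"
            self.entries[name] = (value, time.monotonic() + ttl)

    def get(self, name):
        entry = self.entries.get(name)
        if entry is None:
            return None                      # never cached (or already evicted)
        value, expiry = entry
        if time.monotonic() >= expiry:       # stale: discard, forcing a fresh query
            del self.entries[name]
            return None
        return value

cache = RecordCache()
cache.put("www.example.com", "93.184.216.34", ttl=3600)
print(cache.get("www.example.com"))   # served from cache until the TTL runs out
cache.put("volatile.example", "10.0.0.1", ttl=0)
print(cache.get("volatile.example"))  # None: TTL 0 records are never cached
```

This is also why a long TTL delays the propagation of changes: every downstream cache keeps serving the old value until its own copy expires.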
The disadvantage is that this configuration requires you to manage two separate namespaces. Also, using a stand-alone internal domain that is unrelated to your external domain might create confusion for users, because the namespaces do not reflect a relationship between resources within and outside of your network. In addition, you might have to register two DNS names with an Internet name authority if you want to make the internal domain publicly accessible.

Q.Describe the importance of DNS to AD.
Answer: When Microsoft began development on Active Directory, full compatibility with the Domain Name System (DNS) was a critical priority. Active Directory was built from the ground up not just to be fully compatible with DNS but to be so integrated with it that one cannot exist without the other. Microsoft's direction in this case did not just happen by chance, but because of the central role that DNS plays in Internet name resolution and Microsoft's desire to make its product lines embrace the Internet. While fully conforming to the standards established for DNS, Active Directory expands upon the standard feature set of DNS and offers some new capabilities, such as AD-integrated DNS, which greatly eases the administration required for DNS environments. In addition, Active Directory can easily adapt to exist in a foreign DNS environment, such as Unix BIND, as long as the BIND version is 8.2.x or higher.
Q.What does "Disable Recursion" in DNS mean?
Answer: In the Windows 2000/2003 DNS console (dnsmgmt.msc), under a server's Properties -> Forwarders tab is the setting "Do not use recursion for this domain". On the Advanced tab you will find the confusingly similar option "Disable recursion (also disables forwarders)". Recursion refers to the action of a DNS server querying additional DNS servers (e.g., the local ISP's DNS servers or the root DNS servers) to resolve queries that it cannot resolve from its own database. So what is the difference between these settings?
By default, the DNS server will attempt to resolve the name locally, then will forward requests to any DNS servers specified as forwarders. If "Do not use recursion for this domain" is enabled, the DNS server will pass the query on to forwarders, but will not recursively query any other DNS servers (e.g., external DNS servers) if the forwarders cannot resolve the query. If "Disable recursion (also disables forwarders)" is set, the server will attempt to resolve a query from its own database only; it will not query any additional servers. If neither of these options is set, the server attempts to resolve queries normally: the local database is queried first; if an entry is not found, the request is passed to any forwarders that are set; and if no forwarders are set, the server queries the servers on the Root Hints tab, resolving queries beginning at the root domains.

Q.What is a "Single Label domain name" and what sort of issues can it cause?
Answer: Single-label names consist of a single word, like "contoso".
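Stepping back to the two recursion settings above, their effect on resolution order can be modeled with a toy resolver. The flag names mirror the console options; the dictionary "databases" are stand-ins for illustration only, not a real DNS implementation:

```python
# Toy model of the resolution order: local data first, then forwarders,
# then recursion via root hints, subject to the two console settings.
def resolve(name, local_db, forwarders, root_hints,
            no_recursion_for_domain=False, disable_recursion=False):
    if name in local_db:
        return local_db[name]
    if disable_recursion:            # "Disable recursion (also disables forwarders)"
        return None                  # local database only
    if name in forwarders:
        return forwarders[name]
    if no_recursion_for_domain:      # "Do not use recursion for this domain"
        return None                  # forwarders tried, but no further recursion
    return root_hints.get(name)      # normal recursion via root hints

local = {"a.local": "10.0.0.1"}
fwd = {"b.com": "10.0.0.2"}
roots = {"c.org": "10.0.0.3"}
print(resolve("c.org", local, fwd, roots))  # 10.0.0.3 (resolved via recursion)
print(resolve("c.org", local, fwd, roots, no_recursion_for_domain=True))  # None
```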
Single-label DNS names cannot be registered by using an Internet registrar. Client computers and domain controllers that are joined to single-label domains require additional configuration to dynamically register DNS records in single-label DNS zones. Client computers and domain controllers may also require additional configuration to resolve DNS queries in single-label DNS zones. By default, Windows Server 2003-based, Windows XP-based, and Windows 2000-based domain members do not perform dynamic updates to single-label DNS zones. Some server-based applications are incompatible with single-label domain names; application support may not exist in the initial release of an application, or support may be dropped in a future release. For example, Microsoft Exchange Server 2007 is not supported in environments in which single-label DNS is used. Some server-based applications are also incompatible with the domain rename feature that is supported on Windows Server 2003 and Windows Server 2008 domain controllers. These incompatibilities either block or complicate the use of the domain rename feature when you try to rename a single-label DNS name to a fully qualified domain name.

Q.What is the "in-addr.arpa" zone used for?
Answer: In a Domain Name System (DNS) environment, it is common for a user or an application to request a reverse lookup of a host name, given the IP address. The following is quoted from RFC 1035:
"The Internet uses a special domain to support gateway location and Internet address to host mapping. Other classes may employ a similar strategy in other domains. The intent of this domain is to provide a guaranteed method to perform host address to host name mapping, and to facilitate queries to locate all gateways on a particular network on the Internet.
"The domain begins at IN-ADDR.ARPA and has a substructure which follows the Internet Addressing structure.
"Domain names in the IN-ADDR.ARPA domain are defined to have up to four labels in addition to the IN-ADDR.ARPA suffix. Each label represents one octet of an Internet address, and is expressed as a character string for a decimal value in the range 0-255 (with leading zeros omitted except in the case of a zero octet, which is represented by a single zero).
"Host addresses are represented by domain names that have all four labels specified."
Reverse lookup files use the structure specified in RFC 1035. For example, if you have a network which is 150.10.0.0, then the reverse lookup file for this network would be 10.150.IN-ADDR.ARPA. Any hosts with IP addresses in the 150.10.0.0 network will have a PTR (or "pointer") entry in 10.150.IN-ADDR.ARPA referencing the host name for that IP address. A single IN-ADDR.ARPA file may contain entries for hosts in many domains. Consider the following scenario: there is a reverse lookup file 10.150.IN-ADDR.ARPA with the following contents:
Example: 1.20 IN PTR WS1.ACME.COM.

Active Directory Interview Questions

Q.What are the requirements from DNS to support AD?
Answer: When you install Active Directory on a member server, the member server is promoted to a domain controller. Active Directory uses DNS as the location mechanism for domain controllers, enabling computers on the network to obtain IP addresses of domain controllers. During the installation of Active Directory, the service (SRV) and address (A) resource records are dynamically registered in DNS; these are necessary for the successful functioning of the domain controller locator (Locator) mechanism. To find domain controllers in a domain or forest, a client queries DNS for the SRV and A resource records of the domain controller, which provide the client with the names and IP addresses of the domain controllers. In this context, the SRV and A resource records are referred to as Locator DNS resource records.
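The Locator records just mentioned are SRV records, whose owner name and data follow the RFC 2782 layout: the owner is built from the service and protocol labels, and the data carries priority, weight, port, and target. A minimal sketch, with made-up zone and host names:

```python
# Build an SRV record line in the RFC 2782 shape:
# _service._proto.name  IN SRV priority weight port target
def srv_record(service, proto, domain, priority, weight, port, target):
    owner = f"_{service}._{proto}.{domain}"
    return f"{owner} IN SRV {priority} {weight} {port} {target}"

# Example: an LDAP service record of the kind a domain controller registers.
print(srv_record("ldap", "tcp", "contoso.com", 0, 100, 389, "dc1.contoso.com."))
# _ldap._tcp.contoso.com IN SRV 0 100 389 dc1.contoso.com.
```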
When adding a domain controller to a forest, you are updating a DNS zone hosted on a DNS server with the Locator DNS resource records that identify the domain controller. For this reason, the DNS zone must allow dynamic updates (RFC 2136), and the DNS server hosting that zone must support the SRV resource records (RFC 2782) used to advertise the Active Directory directory service. For more information about RFCs, see DNS RFCs. If the DNS server hosting the authoritative DNS zone is not a server running Windows 2000 or Windows Server 2003, contact your DNS administrator to determine whether the DNS server supports the required standards. If the server does not support the required standards, or the authoritative DNS zone cannot be configured to allow dynamic updates, then your existing DNS infrastructure requires modification. For more information, see "Checklist: Verifying DNS before installing Active Directory" and "Using the Active Directory Installation Wizard".
Important: The DNS server used to support Active Directory must support SRV resource records for the Locator mechanism to function. For more information, see "Managing resource records". It is recommended that the DNS infrastructure allow dynamic updates of Locator DNS resource records (SRV and A) before you install Active Directory, but your DNS administrator may add these resource records manually after installation. After you install Active Directory, these records can be found on the domain controller in the following location: systemroot\System32\Config\Netlogon.dns

Q.How do you manually create SRV records in DNS?
Answer: On a Windows server, go to Run, type dnsmgmt.msc, right-click the zone you want to add the SRV record to, choose "Other New Records", and select Service Location (SRV).

Q.Name 3 benefits of using AD-integrated zones.
Answer: Active Directory-integrated DNS enables Active Directory storage and replication of DNS zone databases.
The Windows 2000 DNS server, the DNS server that is included with Windows 2000 Server, accommodates storing zone data in Active Directory. When you configure a computer as a DNS server, zones are usually stored as text files on name servers; that is, all of the zones required by DNS are stored in text files on the server computer. These text files must be synchronized among DNS name servers by using a system that requires a separate replication topology and schedule, called a zone transfer. However, if you use Active Directory-integrated DNS when you configure a domain controller as a DNS name server, zone data is stored as an Active Directory object and is replicated as part of domain replication.

Q.What are the benefits of using Windows 2003 DNS when using AD-integrated zones?
Answer: If your DNS topology includes Active Directory, use Active Directory-integrated zones. Active Directory-integrated zones enable you to store zone data in the Active Directory database. Because standard DNS replication is single-master, a primary DNS server in a standard primary DNS zone can be a single point of failure. In an Active Directory-integrated zone, a primary DNS server cannot be a single point of failure, because Active Directory uses multimaster replication: updates that are made on any domain controller are replicated to all domain controllers, and the zone information held by any primary DNS server within an Active Directory-integrated zone is always replicated.
Active Directory-integrated zones:
Enable you to secure zones by using secure dynamic update.
Provide increased fault tolerance. Every Active Directory-integrated zone can be replicated to all domain controllers within the Active Directory domain or forest. All DNS servers running on these domain controllers can act as primary servers for the zone and accept dynamic updates.
Enable replication that propagates changed data only, compresses replicated data, and reduces network traffic.
If you have an Active Directory infrastructure, you can use Active Directory-integrated zones only on Active Directory domain controllers. If you are using Active Directory-integrated zones, you must decide whether or not to store them in an application directory partition. You can combine Active Directory-integrated zones and file-based zones in the same design. For example, if the DNS server that is authoritative for the private root zone is running on an operating system other than Windows Server 2003 or Windows 2000, it cannot act as an Active Directory domain controller; therefore, you must use file-based zones on that server. However, you can delegate this zone to any domain controller running either Windows Server 2003 or Windows 2000.

Q.You installed a new AD domain and the new (and first) DC has not registered its SRV records in DNS. Name a few possible causes.

Q.What are the benefits and scenarios of using Stub zones?
Answer: A stub zone is a copy of a zone that contains only those resource records necessary to identify the authoritative Domain Name System (DNS) servers for that zone. A stub zone is used to resolve names between separate DNS namespaces. This type of resolution may be necessary, for example, when a corporate merger requires that the DNS servers for two separate DNS namespaces resolve names for clients in both namespaces.
A stub zone consists of:
The start of authority (SOA) resource record, name server (NS) resource records, and the glue A resource records for the delegated zone.
The IP address of one or more master servers that can be used to update the stub zone.
The master servers for a stub zone are one or more DNS servers authoritative for the child zone, usually the DNS server hosting the primary zone for the delegated domain name.
Use stub zones to:
Keep delegated zone information current. By updating a stub zone for one of its child zones regularly, the DNS server hosting both the parent zone and the stub zone will maintain a current list of authoritative DNS servers for the child zone.
Improve name resolution. Stub zones enable a DNS server to perform recursion using the stub zone's list of name servers, without needing to query the Internet or an internal root server for the DNS namespace.
Simplify DNS administration. By using stub zones throughout your DNS infrastructure, you can distribute a list of the authoritative DNS servers for a zone without using secondary zones. However, stub zones do not serve the same purpose as secondary zones and are not an alternative when considering redundancy and load sharing.
There are two lists of DNS servers involved in the loading and maintenance of a stub zone:
The list of master servers from which the DNS server loads and updates the stub zone. A master server may be a primary or secondary DNS server for the zone; in either case, it will have a complete list of the DNS servers for the zone.
The list of the authoritative DNS servers for the zone. This list is contained in the stub zone in the form of name server (NS) resource records.
When a DNS server loads a stub zone, such as widgets.example.com, it queries the master servers, which can be in different locations, for the necessary resource records of the authoritative servers for the zone widgets.example.com. The list of master servers may contain a single server or multiple servers, and it can be changed at any time. For more information, see "Configure a stub zone for local master servers".

Q.What are the benefits and scenarios of using Conditional Forwarding?
Answer: Rather than having a DNS server forward all queries it cannot resolve to forwarders, the DNS server can forward queries for different domain names to different DNS servers, according to the specific domain names that are contained in the queries. Forwarding according to these domain-name conditions improves conventional forwarding by adding a second condition to the forwarding process. A conditional forwarder setting consists of a domain name and the IP address of one or more DNS servers. To configure a DNS server for conditional forwarding, a list of domain names is set up on the Windows Server 2003-based DNS server, along with the DNS server IP address for each. When a DNS client or server performs a query operation against a Windows Server 2003-based DNS server that is configured for forwarding, the DNS server first checks whether the query can be resolved by using its own zone data or the data stored in its cache; then, if the DNS server is configured to forward for the domain name designated in the query (a match), the query is forwarded to the IP address of the DNS server associated with that domain name. If the DNS server has no domain name listed for the name that is designated in the query, it attempts to resolve the query by using standard recursion.

Q.What are the differences between Windows Clustering, Network Load Balancing and Round Robin, and scenarios for each use?
Answer: Cluster technologies are becoming increasingly important in ensuring that service offerings meet the requirements of the enterprise. Windows 2000 and Windows Server 2003 support three cluster technologies to provide high availability, reliability and scalability: NLB, CLB and Server cluster. Each of these technologies has a specific purpose and is designed to meet different requirements.
Server cluster provides failover support for applications and services that require high availability, scalability and reliability, and is ideally suited for back-end applications and services, such as database servers. Server clusters can use various combinations of active and passive nodes to provide failover support for mission-critical applications and services.
NLB provides failover support for IP-based applications and services that require high scalability and availability, and is ideally suited for Web-tier and front-end services. NLB clusters can use multiple adapters and different broadcast methods to assist in the load balancing of TCP, UDP and GRE traffic requests.
Component Load Balancing (CLB) provides dynamic load balancing of middle-tier application components that use COM+, and is ideally suited for application servers. CLB clusters use two clusters: the routing cluster can be configured as a routing list on the front-end Web servers or as separate servers that run Server cluster.
Cluster technologies by themselves are not enough to ensure that high availability goals are met. Multiple physical locations may be necessary to guard against natural disasters and other events that may cause a complete service outage. Effective processes and procedures, in addition to good architecture, are the keys to high availability.
Round robin is a local balancing mechanism used by DNS servers to share and distribute network resource loads. You can use it to rotate all resource record (RR) types contained in a query answer if multiple RRs are found. By default, DNS uses round robin to rotate the order of RR data returned in query answers where multiple RRs of the same type exist for a queried DNS domain name. This feature provides a simple method for load balancing client use of Web servers and other frequently queried multihomed computers.
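The round-robin rotation just described can be sketched in a few lines: each answer contains the same RR set, rotated so that a different record comes first. The addresses are examples, and a real DNS server rotates records inside the answer rather than using a Python deque:

```python
from collections import deque

class RoundRobinAnswers:
    """Toy model of DNS round robin over a set of A records."""
    def __init__(self, records):
        self._records = deque(records)

    def answer(self):
        result = list(self._records)
        self._records.rotate(-1)   # next query starts one record later
        return result

rrset = RoundRobinAnswers(["192.0.2.1", "192.0.2.2", "192.0.2.3"])
print(rrset.answer())  # ['192.0.2.1', '192.0.2.2', '192.0.2.3']
print(rrset.answer())  # ['192.0.2.2', '192.0.2.3', '192.0.2.1']
```

Clients that take the first address in the answer are thus spread across all three hosts over successive queries.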
If round robin is disabled for a DNS server, the order of the response for these queries is based on a static ordering of RRs in the answer list as they are stored in the zone (either its zone file or Active Directory).

Q.How do I clear the DNS cache on the DNS server?
Answer: To clear the DNS cache, do the following:
Click Start, then Run.
Type "cmd" and press Enter.
In the command window, type "ipconfig /flushdns".
If done correctly, it should say "Successfully flushed the DNS Resolver Cache."

Q.What is the 224.0.1.24 address used for?
Answer: It is the WINS server group address, used to support auto-discovery and dynamic configuration of replication for WINS servers. For more information, see the WINS replication overview.

Q.What is WINS and when do we use it?
Answer: Microsoft Windows Internet Name Service (WINS) is an RFC-compliant NetBIOS name-to-IP-address mapping service. WINS allows Windows-based clients to easily locate resources on Transmission Control Protocol/Internet Protocol (TCP/IP) networks. WINS servers maintain databases of static and dynamic resource name-to-IP-address mappings. Because the Microsoft WINS database supports dynamic name and IP address entries, WINS can be used with Dynamic Host Configuration Protocol (DHCP) services to provide easy configuration and administration of Windows-based TCP/IP networks. WINS servers provide the following benefits:
A dynamic database that supports NetBIOS computer name registration and name resolution in an environment where DHCP-enabled clients are dynamically configured for TCP/IP.
Centralized management of the NetBIOS computer name database and its replication to other WINS servers.
Reduction of NetBIOS name query IP broadcast traffic.
Support for Windows-based clients (including Windows NT Server, Windows NT Workstation, Windows 95, Windows for Workgroups, and LAN Manager 2.x).
Support for transparent browsing across routers for Windows NT Server, Windows NT Workstation, Windows 95, and Windows for Workgroups clients.
When a client needs to resolve a NetBIOS name, it sends a name query to the WINS server. The WINS server returns the destination computer's IP address to the original computer without the need for broadcast traffic. The second reason for using WINS is that it is dynamic: as computers attach to and detach from the network, the WINS databases are updated automatically. This means that you do not have to create a static LMHOSTS file that the computers read to determine IP addresses.

Q.Can you have a Microsoft-based network without any WINS server on it? What are the "considerations" regarding not using WINS?
Answer: A given network should have one or more WINS servers that WINS clients can contact to resolve a computer name to an IP address. It is desirable to have multiple WINS servers installed on an intranet for the following reasons:
To distribute the NetBIOS computer name query and registration processing load.
To provide WINS database redundancy, backup, and disaster recovery.

Q.Describe the differences between WINS push and pull replications (Microsoft WINS server push and pull partners).
Answer: Microsoft WINS servers communicate with other Microsoft WINS servers to fully replicate their databases with each other. This ensures that a name registered with one WINS server is replicated to all other Microsoft WINS servers within the intranet, providing a replicated, enterprise-wide database. When multiple WINS servers are used, each WINS server is configured as a pull or push partner of at least one other WINS server. The following table describes the pull and push partner types of replication partners.

Q.What is the difference between tombstoning a WINS record and simply deleting it?
Answer: Through replication and convergence, the ownership of a record changes from WINS server to WINS server. Eventually, you may end up with a scenario in which a WINS server owns a record while its direct replication partner holds only a replica of the record and does not own it. The problem occurs when nothing refreshes the record on the remote WINS server: the record will expire, become tombstoned, and be scavenged out of the database.

Q.Name the NetBIOS names you might expect from a Windows 2003 DC that is registered in WINS.
Answer: If a Microsoft Windows NT 3.5-based client computer does not receive a response from the primary Windows Internet Name Service (WINS) server, it queries the secondary WINS server to resolve a NetBIOS name. However, if a NetBIOS name is not found in the primary WINS server's database, a Windows NT 3.5-based client does not query the secondary WINS server. In Microsoft Windows NT 3.51 and later versions of the Windows operating system, a Windows-based client does query the secondary WINS server if a NetBIOS name is not found in the primary WINS server's database. In Windows NT 3.51, Windows NT 4, Windows 95, Windows 98, Windows 2000, Windows Millennium Edition, Windows XP, and Windows Server 2003, you can specify up to 12 WINS servers. Additional WINS servers are useful when a requested name is not found in the primary WINS server's database or in the secondary WINS server's database; in this situation, the WINS client sends a request to the next server in the list. You can find the list of additional server names in the following registry subkey, where adapter_guid represents the GUID of your adapter:
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\NetBT\Parameters\Interfaces\Tcpip_
Note: Make sure that the NameServerList registry entry in this subkey has a multistring type (REG_MULTI_SZ).
Q.What is TCP/IP? Explain some TCP/IP protocols.
Answer: TCP/IP (Transmission Control Protocol/Internet Protocol) is the basic communication language, or protocol, of the Internet. It can also be used as a communications protocol in a private network (either an intranet or an extranet). When you are set up with direct access to the Internet, your computer is provided with a copy of the TCP/IP program, just as every other computer that you may send messages to or get information from also has a copy of TCP/IP.
TCP/IP is a two-layer program. The higher layer, Transmission Control Protocol, manages the assembling of a message or file into smaller packets that are transmitted over the Internet and received by a TCP layer that reassembles the packets into the original message. The lower layer, Internet Protocol, handles the address part of each packet so that it gets to the right destination. Each gateway computer on the network checks this address to see where to forward the message. Even though some packets from the same message are routed differently than others, they are reassembled at the destination.
TCP/IP uses the client/server model of communication, in which a computer user (a client) requests and is provided a service (such as sending a Web page) by another computer (a server) in the network. TCP/IP communication is primarily point-to-point, meaning each communication is from one point (or host computer) in the network to another point or host computer. TCP/IP and the higher-level applications that use it are collectively said to be "stateless" because each client request is considered a new request unrelated to any previous one (unlike ordinary phone conversations, which require a dedicated connection for the call duration). Being stateless frees network paths so that everyone can use them continuously. (Note that the TCP layer itself is not stateless as far as any one message is concerned.
Its connection remains in place until all packets in a message have been received.)
Many Internet users are familiar with the even higher-layer application protocols that use TCP/IP to get to the Internet. These include the World Wide Web's Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), Telnet, which lets you log on to remote computers, and the Simple Mail Transfer Protocol (SMTP). These and other protocols are often packaged together with TCP/IP as a "suite". Personal computer users with an analog phone modem connection to the Internet usually get to the Internet through the Serial Line Internet Protocol (SLIP) or the Point-to-Point Protocol (PPP). These protocols encapsulate the IP packets so that they can be sent over the dial-up phone connection to an access provider's modem. Protocols related to TCP/IP include the User Datagram Protocol (UDP), which is used instead of TCP for special purposes. Other protocols are used by network host computers for exchanging routing information. These include the Internet Control Message Protocol (ICMP), the Interior Gateway Protocol (IGP), the Exterior Gateway Protocol (EGP), and the Border Gateway Protocol (BGP).

Q.What is NetBIOS?
Answer: NetBIOS (Network Basic Input/Output System) is a session-level API that lets applications on a local network locate one another by name and exchange data. Netbios.exe is a NetBIOS programming sample that implements an echo server and client. The sample illustrates how a client and server should be written in order to make the application protocol- and LAN Adapter (LANA)-independent. It also shows how to avoid common mistakes programmers frequently make when writing NetBIOS applications under Win32.

Q.Describe the role of the routing table on a host and on a router.
Answer: In internetworking, routing is the process of moving a packet of data from source to destination. Routing is usually performed by a dedicated device called a router. Routing is a key feature of the Internet because it enables messages to pass from one computer to another and eventually reach the target machine.
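Routing decisions like these ultimately come down to a routing-table lookup: among all routes whose network contains the destination, the most specific (longest-prefix) match wins. A minimal sketch using the Python stdlib, with made-up routes and interface names:

```python
import ipaddress

def lookup(routes, destination):
    """Return the next hop for destination via longest-prefix match, or None."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routes
               if dest in ipaddress.ip_network(net)]
    if not matches:
        return None
    # The most specific route (largest prefix length) wins.
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)[1]

routes = [("0.0.0.0/0", "gateway"),   # default route
          ("10.0.0.0/8", "eth0"),
          ("10.1.0.0/16", "eth1")]
print(lookup(routes, "10.1.2.3"))   # eth1 (most specific match wins)
print(lookup(routes, "8.8.8.8"))    # gateway (falls through to the default route)
```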
Each intermediary computer performs routing by passing along the message to the next computer. Part of this process involves analyzing a routing table to determine the best path. A router is a device that forwards data packets along networks. A router is connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP's network. Routers are located at gateways, the places where two or more networks connect. Routers use headers and forwarding tables to determine the best path for forwarding the packets, and they use protocols such as ICMP to communicate with each other and configure the best route between any two hosts. Very little filtering of data is done by routers.

Q.Define the OSI model.
Answer: The Open Systems Interconnection Basic Reference Model (OSI Reference Model or OSI Model) is an abstract description for layered communications and computer network protocol design. It was developed as part of the Open Systems Interconnection (OSI) initiative. In its most basic form, it divides network architecture into seven layers which, from top to bottom, are the Application, Presentation, Session, Transport, Network, Data Link, and Physical layers. It is therefore often referred to as the OSI seven-layer model.

Layer 1: Physical Layer
The Physical Layer defines the electrical and physical specifications for devices. In particular, it defines the relationship between a device and a physical medium. This includes the layout of pins, voltages, cable specifications, hubs, repeaters, network adapters, Host Bus Adapters (HBAs, used in Storage Area Networks) and more. To understand the function of the Physical Layer in contrast to the functions of the Data Link Layer, think of the Physical Layer as concerned primarily with the interaction of a single device with a medium, whereas the Data Link Layer is concerned more with the interactions of multiple devices (i.e., at least two) with a shared medium.
The Physical Layer will tell one device how to transmit to the medium, and another device how to receive from it (in most cases it does not tell the device how to connect to the medium). Obsolescent Physical Layer standards such as RS-232 do use physical wires to control access to the medium.

Layer 2: Data Link Layer
The Data Link Layer provides the functional and procedural means to transfer data between network entities and to detect and possibly correct errors that may occur in the Physical Layer. Originally, this layer was intended for point-to-point and point-to-multipoint media, characteristic of wide area media in the telephone system. Local area network architecture, which included broadcast-capable multiaccess media, was developed independently of the ISO work, in IEEE Project 802. The IEEE work assumed sublayering and management functions not required for WAN use. In modern practice, only error detection, not flow control using sliding windows, is present in data link protocols such as the Point-to-Point Protocol (PPP); on local area networks, the IEEE 802.2 LLC layer is not used for most protocols on Ethernet, and its flow control and acknowledgment mechanisms are rarely used on other local area networks. Sliding-window flow control and acknowledgment are used at the Transport Layer by protocols such as TCP, but are still used at the data link layer in niches where X.25 offers performance advantages. Both WAN and LAN services arrange bits from the Physical Layer into logical sequences called frames. Not all Physical Layer bits necessarily go into frames, as some of these bits are purely intended for Physical Layer functions. For example, every fifth bit of the FDDI bit stream is not used by the Data Link Layer.

Layer 3: Network Layer
The Network Layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks, while maintaining the quality of service requested by the Transport Layer.
The Network Layer performs network routing functions, and might also perform fragmentation and reassembly and report delivery errors. Routers operate at this layer, sending data throughout the extended network and making the Internet possible. This is a logical addressing scheme: values are chosen by the network engineer, and the addressing scheme is hierarchical. The best-known example of a Layer 3 protocol is the Internet Protocol (IP). It manages the connectionless transfer of data one hop at a time: from end system to ingress router, router to router, and from egress router to destination end system. It is not responsible for reliable delivery to a next hop, but only for the detection of errored packets so they may be discarded. When the medium of the next hop cannot accept a packet at its current length, IP is responsible for fragmenting the packet into pieces small enough for the medium to accept. A number of layer management protocols, a function defined in the Management Annex, ISO 7498/4, belong to the Network Layer. These include routing protocols, multicast group management, Network Layer information and error reporting, and Network Layer address assignment. It is the function of the payload that makes these belong to the Network Layer, not the protocol that carries them.

Layer 4: Transport Layer
The Transport Layer provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers. The Transport Layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented. This means that the Transport Layer can keep track of the segments and retransmit those that fail. Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the Transport Layer, the best-known examples of a Layer 4 protocol are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).
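The fragmentation idea above can be shown with a toy slicer. Real IP fragmentation also tracks fragment offsets in 8-byte units and a more-fragments flag; this sketch keeps only the core idea of splitting a payload to fit the next hop's MTU:

```python
# Sketch of IP-style fragmentation: split a payload so that each
# fragment fits the next hop's MTU. Real IP also carries fragment
# offsets and a more-fragments flag; this toy version just slices.
def fragment(payload: bytes, mtu: int) -> list:
    if mtu <= 0:
        raise ValueError("MTU must be positive")
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]
```

Reassembly at the destination is simply concatenating the fragments in order.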
Layer 5: Session Layer
The Session Layer controls the dialogues/connections (sessions) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for the "graceful close" of sessions, which is a property of TCP, and also for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The Session Layer is commonly implemented explicitly in application environments that use remote procedure calls (RPCs).

Layer 6: Presentation Layer
The Presentation Layer establishes a context between Application Layer entities, in which the higher-layer entities can use different syntax and semantics, as long as the Presentation Service understands both and the mapping between them. The presentation service data units are then encapsulated into Session Protocol Data Units and moved down the stack. The original presentation structure used the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serializing objects and other data structures into and out of XML. ASN.1 has a set of cryptographic encoding rules that allows end-to-end encryption between application entities.

Layer 7: Application Layer
The Application Layer is the OSI layer closest to the end user, which means that both the OSI Application Layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application Layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication.
When identifying communication partners, the Application Layer determines the identity and availability of communication partners for an application with data to transmit. When determining resource availability, the Application Layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication between applications requires cooperation that is managed by the Application Layer. Some examples of Application Layer implementations include Telnet, the File Transfer Protocol (FTP), and the Simple Mail Transfer Protocol (SMTP).

Q. What is LDAP?
Answer: LDAP (Lightweight Directory Access Protocol) is the industry-standard directory access protocol, making Active Directory widely accessible to management and query applications. Active Directory supports LDAPv3 and LDAPv2.

Q. What are routing protocols? Why do we need them? Name a few.
Answer: A routing protocol specifies how routers communicate with each other to disseminate information that allows them to select routes between any two nodes on a network. Typically, each router has prior knowledge only of its immediate neighbors. A routing protocol shares this information so that routers have knowledge of the network topology at large. The term routing protocol may refer more specifically to a protocol operating at Layer 3 of the OSI model which similarly disseminates topology information between routers. Many routing protocols used in the public Internet are defined in documents called RFCs. There are three major types of routing protocols, some with variants: link-state routing protocols, path vector protocols, and distance vector routing protocols.
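Link-state protocols such as OSPF build a map of the topology and then run a shortest-path-first (Dijkstra) computation over it. The sketch below shows that route-selection step on an invented topology with invented link costs:

```python
import heapq

# Link-state routing protocols such as OSPF compute routes with a
# shortest-path-first (Dijkstra) algorithm over the known topology.
# A minimal sketch; the graph format is {node: {neighbor: cost}} and
# the example topology/costs are invented for illustration.
def shortest_path_cost(graph, src, dst):
    """Return the lowest total cost of any path from src to dst."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")
```

Distance vector protocols such as RIP reach similar answers by a different mechanism: each router repeatedly shares its own distance table with neighbors instead of flooding the full topology.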
The specific characteristics of routing protocols include the manner in which they either prevent routing loops from forming or break routing loops if they do form, and the manner in which they determine preferred routes from a sequence of hop costs and other preference factors. Examples include:
IGRP (Interior Gateway Routing Protocol)
EIGRP (Enhanced Interior Gateway Routing Protocol)
OSPF (Open Shortest Path First)
RIP (Routing Information Protocol)
IS-IS (Intermediate System to Intermediate System)

Q. What are router interfaces? What types can they be?
Answer: The interfaces on a router provide network connectivity to the router. The console and auxiliary ports are used for managing the router. Routers also have ports for LAN and WAN connectivity. The LAN interfaces usually include Ethernet, Fast Ethernet, Fiber Distributed Data Interface (FDDI), or Token Ring. The AUI port is used to provide LAN connectivity; you can use a converter to attach your LAN to the router. Some higher-end routers have separate interfaces for ATM (Asynchronous Transfer Mode) as well. Sync and Async serial interfaces are used for WAN connectivity. ISDN (Integrated Services Digital Network) interfaces are used to provide ISDN connectivity; using ISDN, you can transmit both voice and data.

Bus Topology
Ethernet is one of the earliest LAN technologies. To prevent collisions on the shared medium, Ethernet uses Carrier Sense Multiple Access with Collision Detection (CSMA/CD). An Ethernet LAN typically uses special grades of twisted pair cabling. Ethernet networks can also use coaxial cable, but this cable medium is becoming less common. The most commonly installed Ethernet systems are called 10BaseT. The router provides the interfaces for twisted pair cables. A converter can be attached to the AUI port of a router to connect to a 10Base2, 10BaseT, or 10Base5 LAN interface. Ethernet and Token Ring use MAC addressing (physical addressing). The Ethernet interfaces on the router are E0, E1, E2, and so on.
E stands for Ethernet, and the number that follows represents the port number. These interfaces provide connectivity to an Ethernet LAN. In a non-modular Cisco router, the Ethernet ports are named as above, but in modular routers they are named as E0/1, where E stands for Ethernet, 0 stands for the slot number, and 1 stands for the port number in that slot.

Token Ring Topology
Token Ring is the second most widely used LAN technology after Ethernet. All computers are connected in a logical ring topology; physically, each host attaches to an MSAU (Multistation Access Unit) in a star configuration. MSAUs can be chained together to maintain the logical ring topology. An empty frame called a token is passed around the network. A device on the network can transmit data only when the empty token reaches the device. This eliminates collisions on a Token Ring network. Token Ring uses MAC addresses just like any other LAN technology. The Token Ring interfaces on a non-modular router are To0, To1, To2, and so on. "To" stands for Token Ring and the number following "To" signifies the port number. In a modular router, "To" is followed by the slot number/port number.

FDDI
Fiber Distributed Data Interface (FDDI) is a LAN technology that uses fiber optic cable. FDDI is a ring topology that uses four-bit symbols rather than eight-bit octets in its frames; the 48-bit MAC addresses have 12 four-bit symbols in FDDI. FDDI is very fast, provides a data transfer rate of 100 Mbps, and uses a token-passing mechanism to prevent collisions. FDDI uses two rings with their tokens moving in opposite directions to provide redundancy to the network. Usually only one ring is active at a given time. If one ring breaks, the other ring is used and the network does not experience downtime. FDDI interfaces on a non-modular Cisco router are F0, F1, F2, and so on. "F" stands for FDDI and the number following "F" signifies the port number.
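The interface-naming convention described above (a type prefix, then either a port number or a slot/port pair) is regular enough to parse mechanically. A small sketch, covering only the prefixes mentioned in this section:

```python
import re

# Parse Cisco-style interface names as described above: a type prefix
# (E, To, F, BRI) followed by either a port number (non-modular
# router) or slot/port (modular router). Illustrative sketch only.
PATTERN = re.compile(r"^(E|To|F|BRI)(\d+)(?:/(\d+))?$")

def parse_interface(name: str) -> dict:
    m = PATTERN.match(name)
    if not m:
        raise ValueError(f"unrecognized interface name: {name}")
    kind, first, second = m.group(1), int(m.group(2)), m.group(3)
    if second is None:
        # Non-modular form, e.g. "E0" or "To2": just a port number.
        return {"type": kind, "slot": None, "port": first}
    # Modular form, e.g. "E0/1": slot number, then port number.
    return {"type": kind, "slot": first, "port": int(second)}
```

For example, `parse_interface("E0/1")` yields slot 0, port 1, matching the modular-router naming described in the text.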
In a modular router, a slot number/port number will follow "F".

ISDN
Integrated Services Digital Network (ISDN) is a set of ITU-T (Telecommunication Standardization Sector of the International Telecommunication Union) standards for digital transmission over ordinary telephone copper wire as well as over other media. ISDN provides the integration of analog voice data together with digital data over the same network. ISDN has two levels of service:
Basic Rate Interface (BRI)
Primary Rate Interface (PRI)
The BRI interfaces for ISDN on a non-modular router are BRI0, BRI1, and so on, with the number following "BRI" signifying the port number. In a modular router, BRI is followed by the slot number/port number.

Synchronous and Asynchronous Serial Interfaces
Synchronous transmission signals occur at the same clock rate, and all clocks are based on a single reference clock. Asynchronous transmission is a character-by-character transmission type in which each character is delimited by a start and stop bit, so clocks are not needed in this type of transmission. Synchronous communication requires a response at the end of each exchange of frames, while asynchronous communication does not require responses. Support for the Synchronous Serial interface is supplied on the Multiport Communications Interface (CSC-MCI) and the Serial Port Communications Interface (CSC-SCI) network interface cards. The Asynchronous Serial interface is provided by a number of methods, including RJ-11, RJ-45, and 50-pin Telco connectors. Some ports can function both as Synchronous Serial interfaces and Asynchronous Serial interfaces; such ports are called Async/Sync ports. The Async/Sync ports support Telco and RJ-11 connectors.

Q. What is the real difference between NAT and PAT?
Answer: Port Address Translation (PAT) is a special kind of Network Address Translation (NAT). It can provide an excellent solution for a company that has multiple systems that need to access the Internet but that has only a few public IP addresses.
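The start/stop-bit framing that lets asynchronous transmission work without a shared clock can be shown in a few lines. This is a toy model of the idea (8-N-1 style framing, least-significant bit first), not an implementation of any real serial driver:

```python
# Toy model of asynchronous character framing: each 8-bit character
# is delimited by a start bit (0) and a stop bit (1), so sender and
# receiver need no shared clock. Illustrative sketch only.
def frame_char(byte: int) -> list:
    bits = [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    return [0] + bits + [1]                     # start + data + stop

def unframe_char(bits: list) -> int:
    assert bits[0] == 0 and bits[-1] == 1, "bad start/stop bit"
    return sum(b << i for i, b in enumerate(bits[1:9]))
```

Each character therefore costs 10 bit times on the wire, which is the classic overhead of asynchronous serial transmission compared with synchronous framing.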
Let's take a look at the distinctions between NAT and PAT and see how they are typically used.

Understanding PAT and NAT
Before discussing PAT, it will help to describe what NAT does in general. NAT was designed as a solution to the lack of public IP addresses available on the Internet. The basic concept of NAT is that it allows inside/internal hosts to use the private address spaces (the 10/8, 172.16/12, and 192.168/16 networks; see RFC 1918), go through the internal interface of a router running NAT, and then have the internal addresses translated to the router's public IP address on the external interface that connects to the Internet. If you dig into NAT a little deeper, you will discover that there are really three ways to configure it. From these configurations, you can perform a variety of functions.

Q. What is a VPN? What types of VPN does Windows 2000 and beyond work with natively?
Answer: Microsoft defines a virtual private network as the extension of a private network that encompasses links across shared or public networks like the Internet. With a VPN, you can send data between two computers across a shared or public network in a manner that emulates a point-to-point private link (such as a dial-up or long-haul T-Carrier-based WAN link). Virtual private networking is the act of creating and configuring a virtual private network. To emulate a point-to-point link, data is encapsulated, or wrapped, with a header that provides routing information, which allows the data to traverse the shared or public network to reach its endpoint. To emulate a private link, the data is encrypted for confidentiality. Packets that are intercepted on the shared or public network are indecipherable without the encryption keys. The link in which the private data is encapsulated and encrypted is a VPN connection. There are two key VPN scenarios: remote access and site-to-site.
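What makes PAT "port" address translation is its translation table: many private (IP, port) pairs share one public IP and are distinguished by the translated source port. The sketch below models that table; the class, its port-allocation policy, and the example addresses are all invented for illustration:

```python
# Sketch of a PAT translation table: many private (ip, port) pairs
# share one public IP address, distinguished by the translated source
# port. The port-allocation policy here is invented for illustration.
class PatTable:
    def __init__(self, public_ip, first_port=1024):
        self.public_ip = public_ip
        self.next_port = first_port
        self.out = {}   # (private_ip, private_port) -> public_port
        self.back = {}  # public_port -> (private_ip, private_port)

    def translate(self, private_ip, private_port):
        """Outbound: map a private endpoint to (public_ip, port)."""
        key = (private_ip, private_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.out[key])

    def untranslate(self, public_port):
        """Inbound: recover the private endpoint for a reply."""
        return self.back[public_port]
```

Two inside hosts can even use the same private source port: they are given different public ports, so replies from the Internet are routed back unambiguously.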
In remote access, the communications are encrypted between a remote computer (the VPN client) and the remote access VPN gateway (the VPN server) to which it connects. In site-to-site (also known as router-to-router), the communications are encrypted between two routers (VPN gateways) that link two sites.

Q. What are the benefits of using VPN connections?
Answer: For remote access connections, an organization can use VPN connections to leverage the worldwide connectivity of the Internet and trade their direct-dial remote access solutions (and their corresponding equipment and maintenance costs) for a single connection to an Internet service provider (ISP), without sacrificing the privacy of a dedicated dial-up connection. For routed connections, an organization can use VPN connections to leverage the worldwide connectivity of the Internet and trade long-distance dial-up or leased lines for simple connections to an Internet service provider (ISP), without sacrificing the privacy of a dial-up or dedicated site-to-site link.

Q. What is IAS? In what scenarios do we use it?
Answer: Internet Authentication Service (IAS) in Microsoft Windows Server 2003 (Standard, Enterprise, and Datacenter Editions) is the Microsoft implementation of a Remote Authentication Dial-In User Service (RADIUS) server and proxy. In Windows Server 2008, the RADIUS server and proxy implementation is known as Network Policy Server (NPS).
As a RADIUS server, IAS performs centralized connection authentication, authorization, and accounting for many types of network access, including wireless, authenticating switch, and remote access dial-up and virtual private network (VPN) connections. As a RADIUS proxy, IAS forwards authentication and accounting messages to other RADIUS servers. RADIUS is an Internet Engineering Task Force (IETF) standard.

Features of IAS
To optimize IAS authentication and authorization response times and minimize network traffic, install IAS on a domain controller. When user principal names (UPNs) or Windows Server 2003 domains are used, IAS uses the global catalog to authenticate users. To minimize the time it takes to do this, install IAS on either a global catalog server or a server that is on the same subnet. For more information, see The role of the global catalog. For more information about domain functionality, see Domain and forest functionality.

When you have remote RADIUS server groups configured and, in IAS Connection Request Policies, you clear the Record accounting information on the servers in the following remote RADIUS server group check box, these groups are still sent network access server (NAS) start and stop notification messages. This creates unnecessary network traffic. To eliminate this traffic, disable NAS notification forwarding for individual servers in each remote RADIUS server group by clearing the Forward network start and stop notifications to this server check box. For more information, see Configure the authentication and accounting settings of a group member and Configure accounting.

Q. What's the difference between Mixed mode and Native mode in AD when dealing with RRAS?
Answer: Like Windows 2000 and Active Directory, Exchange 2000 also has native and mixed modes of operation.
Moving your Exchange organization to native mode offers advantages over mixed mode, but you must thoroughly understand the differences between native and mixed mode before planning a switch to native mode. By default, Exchange 2000 installs and operates in mixed mode. Mixed mode allows Exchange 2000 and Exchange 5.5 servers to coexist and communicate. However, this backward compatibility limits administrative flexibility. Under mixed mode, Exchange 5.5 sites map directly to administrative groups and administrative groups map directly to Exchange 5.5 sites. All servers in a site must use a common service account, just as with Exchange 5.5. In addition, routing groups can only contain servers from a single administrative group.

Native mode allows more flexibility than mixed mode. With Exchange in native mode, you can place servers from multiple administrative groups into a single routing group, and you can move servers between routing groups. You can do away with the requirement that all servers in a site use a common service account. Additionally, operating in native mode allows you to move mailboxes between servers in the organization (removing the intersite mailbox move limitation in Exchange 5.5). For some companies, this enhanced mailbox move capability is reason enough to switch to native mode.

Q. What's the difference between Mixed mode and Native mode in AD when dealing with RRAS?
Answer: The domain functional levels that can be set for Active Directory in Windows Server 2003 are listed below. The Windows 2000 Mixed and Windows 2000 Native domain functional levels were available in Windows 2000 to enable backward compatibility with operating systems such as Windows NT 4.0. The latter two functional levels are only available with Windows Server 2003.

Windows 2000 Mixed: This is the default functional level implemented when you install a Windows Server 2003 domain controller. The basic Active Directory features are available when this mode is configured.
Windows 2000 Native: At the Windows 2000 Native functional level, Windows NT backup domain controllers are not supported as domain controllers in the domain. Only Windows 2000 domain controllers and Windows Server 2003 domain controllers are supported. The main difference between Windows 2000 Mixed and Windows 2000 Native, in terms of Active Directory features, is that features such as group nesting, Universal Groups, and Security ID Histories (SIDHistory) are not available in Windows 2000 Mixed but are available in Windows 2000 Native.

Windows Server 2003 Interim: This functional level is used when Windows NT domains are directly upgraded to Windows Server 2003. Windows Server 2003 Interim is basically identical to Windows 2000 Native. The key point to remember about Windows Server 2003 Interim is that this functional level is used when the forests in your environment do not have Windows 2000 domain controllers.

Windows Server 2003: This domain functional level is used when the domain only includes Windows Server 2003 domain controllers. The features available for the new Windows Server 2003 Interim and Windows Server 2003 domain functional levels are discussed later in this article.

The forest functional level can also be raised to enable additional Active Directory features. You must, however, first raise the functional level of the domains within a forest before you can raise the forest functional level to Windows Server 2003. The domain functional level in this case has to be Windows 2000 Native or Windows Server 2003 before you raise the forest functional level. Domain controllers in the domains of the forest automatically have their functional level set to Windows Server 2003 when you raise the forest functional level to Windows Server 2003. Additional Active Directory features are immediately available for each domain in the forest. The forest functional levels that can be set for Active Directory in Windows Server 2003 are listed below.
Windows 2000: In this forest functional level, Windows NT, Windows 2000 and Windows Server 2003 domain controllers can exist in domains.
Windows Server 2003 Interim: Windows NT backup domain controllers and Windows Server 2003 domain controllers can exist in domains.
Windows Server 2003: The domain controllers are all running Windows Server 2003.

Your Exchange organization is a candidate for native mode operation if you have no remaining Exchange 5.5 servers (or plans to add any) and you don't require Exchange 5.5 connectors. Now that you know about native vs. mixed mode, you may want to start planning a switch to native mode. While making the switch isn't difficult, it's permanent. Begin testing and refining your plan for switching to native mode in a lab environment now.

Active Directory Interview Questions

Q. Where is the AD database held? What other folders are related to AD?
Answer: Active Directory has a hierarchical structure that consists of various components which mirror the network of the organization. The components included in the Active Directory hierarchical structure are listed below:
Sites
Domains
Domain Trees
Forests
Organizational Units (OUs)
Objects
Domain Controllers
Global Catalog
Schema
The Global Catalog and Schema components actually manage the Active Directory hierarchical structure. In Active Directory, logically grouping resources to reflect the structure of the organization enables you to locate resources using the resource's name instead of its physical location. Active Directory logical structures also enable you to manage network accounts and shared resources. The components of Active Directory that represent the logical structure in an organization are: Domains, Organizational Units (OUs), Trees, Forests, and Objects. The components of Active Directory that are regarded as physical structures are used to reflect the organization's physical structure.
The components of Active Directory that are physical structures are: Sites, Subnets, and Domain Controllers. The following section examines the logical and physical components of Active Directory.

A domain in Active Directory consists of a set of computers and resources that all share a common directory database which can store a multitude of objects. Domains contain all the objects that exist in the network, and each domain contains information on the objects that it contains. In Active Directory, domains are considered the core unit of its logical structure. Domains in Active Directory actually differ quite substantially from domains in Windows NT networks. In Windows NT networks, domains are able to store far fewer objects than Active Directory domains can. Windows NT domains are structured as peers to one another: you cannot structure them into a hierarchy. Active Directory domains, on the other hand, can be organized into a hierarchical structure through the use of forests and domain trees.

An Active Directory domain holds the following:
A logical partition of users and groups
All other objects in the environment
In Active Directory, domains have the following common characteristics:
The domain contains all network objects
The domain is a security boundary: access control lists (ACLs) control access to the objects within a domain
Within a domain, objects all have the following common characteristics:
Group Policy and security permissions
Hierarchical object naming
Hierarchical properties
Trust relationships

The majority of components in Active Directory are objects. In Active Directory, objects represent network resources in the network. Objects in Active Directory have a unique name that identifies the object; this is known as the distinguished name of the object. Objects can be organized and divided into object classes. Object classes can be regarded as the logical grouping of objects.
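A distinguished name encodes an object's place in the hierarchy as a comma-separated chain of attribute=value pairs, e.g. "CN=JSmith,OU=Sales,DC=example,DC=com" (an invented example). A minimal parser sketch, ignoring the escaped commas that real LDAP DNs permit:

```python
# Sketch: split a distinguished name (DN) into its (attribute, value)
# components. The example DN is invented; real LDAP DNs also allow
# escaped commas inside values, which this toy parser ignores.
def parse_dn(dn: str) -> list:
    parts = []
    for component in dn.split(","):
        attr, _, value = component.partition("=")
        parts.append((attr.strip(), value.strip()))
    return parts
```

Reading the components right to left walks down the hierarchy: domain components (DC) first, then organizational units (OU), then the object's own common name (CN).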
An object class contains a set of object attributes, which are characteristics of objects in the directory. Attributes can be looked at as properties that contain information on characteristics and configurations. The Active Directory objects that an administrator would most likely be concerned with managing are users, groups and computers. In Active Directory, the main groups are security groups and distribution groups. It is easier to place users into groups and then assign permissions to network resources via these groups. Through implementing groups and using groups effectively, you would be in a good position to manage security and permissions in Active Directory.

Organizational units (OUs) can be considered logical units that can be used to organize objects into logical groups. OUs can be hierarchically arranged within a domain. An organizational unit can contain objects such as user accounts, groups, computers, shared resources, and other OUs. You can also assign permissions to OUs to delegate administrative control. Domains can have their own OU hierarchy. Organizational units are depicted as folders in the Active Directory Users and Computers administrative tool.

In Active Directory, a domain tree is the grouping of one or multiple Windows 2000 or Windows Server 2003 domains. Domain trees are essentially a hierarchical arrangement of these domains. Domain trees are created by adding child domains to a parent domain. Domains that are grouped into a domain tree have a hierarchical naming structure and also share a contiguous namespace. Multiple domains are typically utilized to:
Improve performance
Decentralize administration
Manage and control replication in Active Directory
Through the utilization of multiple domains, you can implement different security policies for each domain. Multiple domains are also implemented when the number of objects in the directory is quite substantial. A forest in Active Directory is the grouping of one or multiple domain trees.
The characteristics of forests are summarized below:
Domains in a forest share a common schema and global catalog, and are connected by implicit two-way transitive trusts.
A global catalog is used to increase performance in Active Directory when users search for attributes of an object. The global catalog server contains a copy of all objects in its associated host domain, as well as a partial copy of objects in the other domains in the forest.
Domains in a forest function independently, with the forest making communication possible with the whole organization.
Domain trees in a forest do not have the same naming structures.

In Active Directory, a site is basically the grouping of one or more Internet Protocol (IP) subnets which are connected by a reliable high-speed link. Sites normally have the same boundaries as a local area network (LAN). Sites should be defined as locations that enable fast and cheap network access. Sites are essentially created to enable users to connect to a domain controller using the reliable high-speed link, and to optimize replication network traffic. Sites determine the time and the manner in which information should be replicated between domain controllers. A site contains the objects listed below that are used to configure replication among sites:
Computer objects
Connection objects

A domain controller is a computer running Windows 2000 or Windows Server 2003 that contains a replica of the domain directory. Domain controllers in Active Directory maintain the Active Directory data store and security policy of the domain. Domain controllers therefore also provide security for the domain by authenticating user logon attempts. The main functions of domain controllers within Active Directory are summarized in the following section. Each domain controller in a domain stores and maintains a replica of the Active Directory data store for the particular domain. Domain controllers in Active Directory utilize multimaster replication.
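Because a site is defined as a group of IP subnets, mapping a client address to its site is a subnet-membership test. This can be sketched with Python's standard ipaddress module; the site names and subnets below are invented for illustration:

```python
import ipaddress

# A site groups one or more IP subnets. Mapping a client address to
# its site is a subnet-membership test; the site names and subnet
# ranges here are invented for illustration.
SITES = {
    "HeadOffice": ["10.1.0.0/16"],
    "Branch":     ["10.2.0.0/16", "192.168.50.0/24"],
}

def site_for(ip: str):
    """Return the name of the site whose subnets contain `ip`."""
    addr = ipaddress.ip_address(ip)
    for site, subnets in SITES.items():
        if any(addr in ipaddress.ip_network(s) for s in subnets):
            return site
    return None  # address falls outside every defined site
```

This is essentially the question a client asks when locating a nearby domain controller: which site's subnets contain my address?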
What this means is that no single domain controller is the master domain controller; all domain controllers are considered peers. Domain controllers also automatically replicate directory information for objects stored in the domain between one another. Updates that are considered important are replicated immediately to the remainder of the domain controllers within the domain. Implementing multiple domain controllers within a domain provides fault tolerance for the domain. In Active Directory, domain controllers can detect collisions. Collisions take place when an attribute modified on one particular domain controller is changed on a different domain controller before the change on the initial domain controller has fully propagated.

Apart from domain controllers, you can have servers configured in your environment that operate as member servers of the domain but do not host Active Directory information. Member servers do not provide any domain security functions either, such as authenticating users. Typical examples of member servers are file servers, print servers, and Web servers. Standalone servers, on the other hand, operate in workgroups and are not members of the Active Directory domain. Standalone servers have and manage their own security databases.

Active Directory Namespace Structure
The Domain Name System (DNS) is the Internet service that Active Directory utilizes to structure computers into domains. DNS domains have a hierarchical structure that identifies computers, organizational domains and top-level domains. Because DNS also maps host names to numeric Transmission Control Protocol/Internet Protocol (TCP/IP) addresses, you can define the Active Directory domain hierarchy on an Internet-wide basis, or privately. Because DNS is an important component of Active Directory, it has to be configured before you install Active Directory.
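Once a collision is detected, the replicas must agree on a single winning value. Active Directory resolves this per attribute using version numbers and timestamps; the function below is a simplified last-writer-wins sketch of that idea, not the actual resolution algorithm:

```python
# Simplified sketch of multimaster collision resolution: when the
# same attribute was changed on two domain controllers, the update
# with the higher version number wins, with ties broken by the later
# timestamp. This illustrates the idea only, not Active Directory's
# actual algorithm.
def resolve(update_a, update_b):
    """Each update is a tuple (version, timestamp, value)."""
    return max(update_a, update_b, key=lambda u: (u[0], u[1]))
```

Because every replica applies the same deterministic rule, all domain controllers converge on the same value without any single master arbitrating.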
The information typically stored in Active Directory can be categorized as follows:
Network security entities: This category contains information such as users, groups, computers, and applications.
Active Directory mechanisms: This category includes permissions, replication, and network services.
Active Directory schema: The Active Directory objects that define the attributes and classes in Active Directory are included here.
To ensure compatibility with the Windows NT domain model, Active Directory is designed and structured on the idea of domains and trust relationships. Because the SAM databases in Windows NT could not be combined, domains have to be joined using trust relationships. With Active Directory, a domain defines the following:
A namespace
A naming context
A security structure
A management structure
Within the domain, you have users and computers that are members of the domain, and group policies. In Active Directory, you can only create a naming context at a domain boundary, or by creating an Application naming context. An Application naming context is a new Active Directory feature introduced in Windows Server 2003. In addition to a Domain naming context, each installation of Active Directory must have a Schema naming context and a Configuration naming context.
Schema naming context: Domain controllers in the forest each have a read-only replica of the Schema naming context, which contains the ClassSchema and AttributeSchema objects. These objects signify the classes and attributes in Active Directory. The domain controller holding the Schema Master role is the only domain controller that can change the schema.
Configuration naming context: Domain controllers in the forest each have a read and write replica of the Configuration naming context.
The Configuration naming context contains the top-level containers listed below, which basically manage the services that support Active Directory:
Display Specifiers container: Objects which change the attributes that can be viewed for the remainder of the object classes are stored in this container. Display Specifiers supply localization and define context menus and property pages. Localization deals with determining the country code utilized during installation, and then serving all content via the proper Display Specifier. Context menus and property pages are defined for each user according to whether the user attempting to access a particular object has Administrator privileges.
Extended Rights container: Because you can assign permissions to objects and to the properties of an object, Extended Rights merges various property permissions to form a single unit. In this manner, Extended Rights manages and controls access to objects.
Lost and Found Config container: The Domain naming context and the Configuration naming context each have a Lost and Found container that holds objects which have gone astray.
Partitions container: The Partitions container contains the cross-reference objects that depict all the other domains in a forest. The Partitions container's data is referenced by domain controllers when they create referrals to these domains. The data in the Partitions container can only be altered by a single domain controller within the forest.
Physical Locations container: The Physical Locations container contains physical location DN objects which are related to Directory Enabled Networking (DEN).
Services container: This container stores the objects of distributed applications and is replicated to all domain controllers within the forest. You can view the contents of the container in the Active Directory Sites and Services console.
Sites container: The objects stored in the Sites container control Active Directory replication, among other site functions.
You can also view the contents of this container in the Active Directory Sites and Services console.
Well-Known Security Principals container: This container stores the names and unique Security Identifiers (SIDs) for groups such as Interactive and Network.

Replication and Active Directory

In Active Directory, directory data classified into the categories listed below is replicated between domain controllers in the domain:
Domain data includes information on the objects stored in a particular domain. This includes objects for user accounts, Group Policy, shared resources and OUs.
Configuration data includes information on the components of Active Directory that illustrate the structure of the directory. Configuration data therefore defines the domains, trees, forests and the location of domain controllers and global catalog servers.
Schema data lists the objects and types of data that can be stored in Active Directory.
Active Directory utilizes multimaster replication. This means that changes can be made to the directory from any domain controller, because the domain controllers operate as peers. The domain controller then replicates the changes that were made. Domain data is replicated to each domain controller within that domain. Configuration data and schema data are replicated to each domain in a domain tree and forest. Objects stored in the domain are replicated to global catalogs. A subset of object properties in the forest is also replicated to global catalogs. Replication that occurs within a site is known as intra-site replication. Replication between sites is known as inter-site replication.

Support Files of Active Directory

The Active Directory support files are listed below. These are the files for which you specify a location when you promote a server to a domain controller:
Ntds.dit (NT Directory Services): Ntds.dit is the core Active Directory database.
This file on a domain controller lists the naming contexts hosted by that particular domain controller.
Edb.log: The Edb.log file is a transaction log. When changes occur to Active Directory objects, the changes are initially saved to the transaction log before they are written to the Active Directory database.
Edbxxxxx.log: These are auxiliary transaction logs that are used in cases where the primary Edb.log file fills up before its changes are written to the Ntds.dit Active Directory database.
Edb.chk: Edb.chk is a checkpoint file that is used by the transaction logging process.
Res1.log and Res2.log: These are reserved log files whose space is used if insufficient space exists to create a new Edbxxxxx.log file.
Temp.edb: Temp.edb contains information on the transactions that are being processed.
Schema.ini: The Schema.ini file is used to initialize the Ntds.dit Active Directory database when a domain controller is promoted.

Q. What is LDAP?
Answer: The Lightweight Directory Access Protocol (LDAP) is a directory service protocol that runs directly over the TCP/IP stack. The information model (both for data and namespaces) of LDAP is similar to that of the X.500 OSI directory service, but with fewer features and lower resource requirements than X.500. Unlike most other Internet protocols, LDAP has an associated API that simplifies writing Internet directory service applications. The LDAP API is applicable to directory management and browser applications that do not have directory service support as their primary function. LDAP cannot create directories or specify how a directory service operates.
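Because LDAP defines a standard string syntax for search filters and URLs, a few hedged examples may help. These follow the standard RFC 4515 filter syntax and RFC 4516 URL syntax; the host name and base DN (dc=example,dc=com) are illustrative placeholders, not values taken from the text above:

```
# Match every entry (the simplest possible filter)
(objectClass=*)

# AND: person entries whose surname is Smith
(&(objectClass=person)(sn=Smith))

# OR with wildcards: common names starting with "a" or "b"
(|(cn=a*)(cn=b*))

# NOT: everything that is not a computer object
(!(objectClass=computer))

# An LDAP URL combining host, base DN, requested attributes, scope and filter
ldap://ldap.example.com:389/dc=example,dc=com?cn,mail?sub?(objectClass=person)
```

Any LDAP client library is expected to accept filters in this textual form, which is why tests and directory tools are portable across servers.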
AngularJS Interview Questions
Q. What is AngularJS? Ans. AngularJS is an open-source JavaScript framework developed by Google. It helps you to create single-page applications or one-page web applications that only require HTML, CSS, and JavaScript on the client side. It is based on the MV-* pattern and allows you to build well-structured, easily testable, and maintainable front-end applications. AngularJS has changed the approach to web development. It does not depend on jQuery to perform its operations. Instead of using ASP.NET Web Forms, ASP.NET MVC, PHP, JSP, or Ruby on Rails for web development, you can do your complete web development using the powerful and adaptive JavaScript framework AngularJS. There is no doubt that JavaScript frameworks like AngularJS, Ember etc. are the future of web development. Q. Why use AngularJS? Ans. The following are reasons to choose AngularJS as a web development framework:
It is based on the MVC pattern, which helps you organize your web apps properly.
It extends HTML by attaching directives to your HTML markup with new attributes or tags and expressions in order to define very powerful templates.
It also allows you to create your own directives, making reusable components that fill your needs and abstract your DOM manipulation logic.
It supports two-way data binding, i.e. it connects your HTML (views) to your JavaScript objects (models) seamlessly. In this way any change in the model will update the view and vice versa, without any DOM manipulation or event handling.
It encapsulates the behaviour of your application in controllers, which are instantiated with the help of dependency injection.
It supports services that can be injected into your controllers to use some utility code to fulfil your needs. For example, it provides the $http service to communicate with a REST service.
It supports dependency injection, which helps you to test your angular app code very easily.
Also, AngularJS has a mature community to help you and wide support over the internet. Q.
Why is this project called "AngularJS"? Ans. HTML has angle brackets, and "ng" sounds like "Angular". That's why it is called AngularJS. Q. What are the advantages of AngularJS? Ans. The following are advantages of AngularJS:
Data Binding - AngularJS provides a powerful data binding mechanism to bind data to HTML elements by using scope.
Customizable & Extensible - AngularJS is customizable and extensible as per your requirements. You can create your own custom components like directives, services etc.
Code Reusability - AngularJS allows you to write code which can be reused, for example a custom directive.
Support - AngularJS has a mature community to help you and wide support over the internet. Also, AngularJS is supported by Google, which gives it an advantage.
Compatibility - AngularJS is based on JavaScript, which makes it easier to integrate with any other JavaScript library and runnable on browsers like IE, Opera, Firefox, Safari, Chrome etc.
Testing - AngularJS is designed to be testable so that you can test your AngularJS app components as easily as possible. It has dependency injection at its core, which makes it easy to test.
Q. How is AngularJS different from other JavaScript frameworks? Ans. Today, AngularJS is the most popular and dominant JavaScript framework for professional web development. It is well suited for small, large and any sized web app. AngularJS is different from other JavaScript frameworks in the following ways:
AngularJS markup lives in the DOM.
AngularJS uses plain old JavaScript objects (POJO).
AngularJS leverages Dependency Injection.
Q. What IDEs can you use for AngularJS development? Ans. AngularJS development can be done with the help of the following IDEs: Visual Studio 2012, 2013, 2015 or higher; Eclipse; WebStorm; Sublime Text; TextMate. Q. Does AngularJS have a dependency on jQuery? Ans. AngularJS has no dependency on the jQuery library, but it can be used with jQuery. Q.
How do you use jQuery with AngularJS? Ans. By default AngularJS uses jQLite, which is a subset of jQuery. If you want to use jQuery, simply load the jQuery library before loading AngularJS. By doing so, Angular will skip jQLite and start using the jQuery library. Q. Compare the features of AngularJS and jQuery. Ans. The comparison of AngularJS and jQuery features is given below (Y = supported, N = not supported):
Feature | jQuery | AngularJS
Abstracts the DOM | Y | Y
Animation Support | Y | Y
AJAX/JSONP | Y | Y
Cross Module Communication | Y | Y
Deferred Promises | Y | Y
Form Validation | N | Y
Integration Test Runner | N | Y
Unit Test Runner | Y | Y
Localization | N | Y
MVC Pattern | N | Y
Templates | N | Y
Two-way Binding | N | Y
One-way Binding | N | Y
Dependency Injection | N | Y
Routing | N | Y
RESTful API | N | Y
Q. What is jQLite or jQuery lite? Ans. jQLite is a subset of jQuery that is built directly into AngularJS. It provides the most commonly used jQuery functions, but only a limited subset of them. Here is a table of the jQuery methods supported by jQLite:
jQuery Method | Limitation, if any
addClass() |
after() |
append() |
attr() |
bind() | Does not support namespaces, selectors or eventData
children() | Does not support selectors
clone() |
contents() |
css() |
data() |
detach() |
empty() |
eq() |
find() | Limited to lookups by tag name
hasClass() |
html() |
text() | Does not support selectors
on() | Does not support namespaces, selectors or eventData
off() | Does not support namespaces or selectors
one() | Does not support namespaces or selectors
parent() | Does not support selectors
prepend() |
prop() |
ready() |
remove() |
removeAttr() |
removeClass() |
removeData() |
replaceWith() |
toggleClass() |
triggerHandler() | Passes a dummy event object to handlers
unbind() | Does not support namespaces
val() |
wrap() |
Q. Is AngularJS a library, framework, plugin or a browser extension? Ans. AngularJS is a first-class JavaScript framework which allows you to build well-structured, easily testable, and maintainable front-end applications.
It is not a library, since a library provides you limited functionality or has dependencies on other libraries. It is not a plugin or browser extension, since it is based on JavaScript and compatible with both desktop and mobile browsers. Q. What browsers does AngularJS support? Ans. The latest version, AngularJS 1.3, supports Safari, Chrome, Firefox, Opera 15+, IE9+ and mobile browsers (Android, Chrome Mobile, iOS Safari, Opera Mobile). AngularJS 1.3 dropped support for IE8, but AngularJS 1.2 will continue to support IE8. Q. What is the size of the angular.js file? Ans. The size of the compressed and minified file is < 36KB. Q. What are AngularJS features? Ans. The features of AngularJS are listed below:
Modules
Directives
Templates
Scope
Expressions
Data Binding
MVC (Model, View & Controller)
Validations
Filters
Services
Routing
Dependency Injection
Testing
Q. How does AngularJS handle security? Ans. AngularJS provides the following built-in protection from basic security holes:
Prevents HTML injection attacks.
Prevents Cross-Site Scripting (XSS) attacks.
Provides XSRF protection for server-side communication.
Also, AngularJS is designed to be compatible with other security measures like Content Security Policy (CSP), HTTPS (SSL/TLS) and server-side authentication and authorization, which greatly reduce the possible attacks. Q. What components can be defined within AngularJS modules? Ans. You can define the following components within your angular module:
Directive
Filter
Controller
Factory
Service
Provider
Value
Config settings and Routes
Q. What is the core module in AngularJS? Ans. ng is the core module in Angular. This module is loaded by default when an angular app is started. It provides the essential components for your angular app like directives, services/factories, filters, global APIs and testing components. Q. How do angular modules load their dependencies? Ans.
An angular module uses configuration and run blocks to inject dependencies (like providers, services and constants) which get applied to the angular app during the bootstrap process. Q. What is the difference between the config() and run() methods in AngularJS? Ans. Configuration block - This block is executed during the provider registration and configuration phase. Only providers and constants can be injected into configuration blocks. This block is used to inject module-wide configuration settings, to prevent accidental instantiation of services before they have been fully configured. This block is created using the config() method.

    angular.module('myModule', [])
      .config(function (injectables) { // provider-injector
        // This is an example of a config block.
        // You can have as many of these as you want.
        // You can only inject Providers (not instances) into config blocks.
      })
      .run(function (injectables) { // instance-injector
        // This is an example of a run block.
        // You can have as many of these as you want.
        // You can only inject instances (not Providers) into run blocks.
      });

Run block - This block is executed after the configuration block. It is used to inject instances and constants. This block is created using the run() method. This method is like the main method in C or C++. The run block is a great place to put event handlers that need to be executed at the root level for the application, for example authentication handlers. Q. When are the dependent modules of a module loaded? Ans. A module might have dependencies on other modules. The dependent modules are loaded by angular before the requiring module is loaded. In other words, the configuration blocks of the dependent modules execute before the configuration blocks of the requiring module. The same is true for the run blocks. Each module is only loaded once, even if multiple other modules require it. Q. What is the Global API? Ans.
The Global API provides global functions to perform common JavaScript tasks such as comparing objects, deep copying, iterating through objects, and converting JSON data. All global functions can be accessed by using the angular object. The list of global functions is given below:
angular.lowercase - Converts the specified string to lowercase.
angular.uppercase - Converts the specified string to uppercase.
angular.forEach - Invokes the iterator function once for each item in the obj collection, which can be either an object or an array.
angular.isUndefined - Determines if a reference is undefined.
angular.isDefined - Determines if a reference is defined.
angular.isObject - Determines if a reference is an Object.
angular.isString - Determines if a reference is a String.
angular.isNumber - Determines if a reference is a Number.
angular.isDate - Determines if a value is a Date.
angular.isArray - Determines if a reference is an Array.
angular.isFunction - Determines if a reference is a Function.
angular.isElement - Determines if a reference is a DOM element (or wrapped jQuery element).
angular.copy - Creates a deep copy of the source, which should be an object or an array.
angular.equals - Determines if two objects or two values are equivalent. Supports value types, regular expressions, arrays and objects.
angular.bind - Returns a function which calls function fn bound to self.
angular.toJson - Serializes input into a JSON-formatted string. Properties with leading $$ characters will be stripped, since angular uses this notation internally.
angular.fromJson - Deserializes a JSON string.
angular.bootstrap - Use this function to manually start up an angular application.
angular.reloadWithDebugInfo - Use this function to reload the current application with debug information turned on.
angular.injector - Creates an injector object that can be used for retrieving services as well as for dependency injection.
angular.element - Wraps a raw DOM element or HTML string as a jQuery element.
angular.module - Used for creating, registering and retrieving Angular modules. Q. What are the Angular prefixes $ and $$? Ans. To prevent accidental name collisions with your code, Angular prefixes the names of public objects with $ and the names of private objects with $$. So, do not use the $ or $$ prefix in your code. Q. What are Filters in AngularJS? Ans. Filters are used to format data before displaying it to the user. They can be used in view templates, controllers, services and directives. There are some built-in filters provided by AngularJS, such as currency, date, number, orderBy, lowercase, uppercase etc. You can also create your own filters. Filter syntax: {{ expression | filter }} Filter example: {{ 99 | currency }} Q. What are Expressions in AngularJS? Ans. AngularJS expressions are much like JavaScript expressions, placed inside HTML templates by using double braces such as: {{expression}}. AngularJS evaluates expressions and then dynamically adds the result to a web page. Like JavaScript expressions, they can contain literals, operators, and variables. Here are some valid AngularJS expressions: {{ 1 + 2 }} {{ x + y }} {{ x == y }} {{ x = 2 }} {{ user.Id }} Get more examples from live experts at Angular JS Online Training. Q. How are AngularJS expressions different from JavaScript expressions? Ans. AngularJS expressions are much like JavaScript expressions, but they differ in the following ways:
Angular expressions can be added inside HTML templates.
Angular expressions don't support control flow statements (conditionals, loops, or exceptions).
Angular expressions support filters to format data before displaying it.
Q. What are Directives in AngularJS? Ans. AngularJS directives are a combination of AngularJS template markup (HTML attributes or elements, or CSS classes) and supporting JavaScript code. The JavaScript directive code defines the template data and behaviors of the HTML elements.
AngularJS directives are used to extend the HTML vocabulary, i.e. they decorate HTML elements with new behaviors and help to manipulate HTML element attributes in interesting ways. There are some built-in directives provided by AngularJS, such as ng-app, ng-controller, ng-repeat, ng-model etc. Q. What is the role of the ng-app, ng-init and ng-model directives? Ans. The main role of these directives is:
ng-app - Initializes the angular app.
ng-init - Initializes the angular app data.
ng-model - Binds HTML elements like input, select, textarea to the angular app model.
Q. How to create custom directives in AngularJS? Ans. You can create your own custom directive by using the following syntax:

    var app = angular.module('app', []);
    // creating a custom directive
    app.directive('myDir', function () {
      return {
        restrict: 'E',   // directive type: E = element, A = attribute, C = class, M = comment
        scope: {         // create a new child scope or an isolate scope
          title: '@'     // @ reads the attribute value,
                         // = provides two-way binding,
                         // & works with functions
        },
        template: '{{ myName }}',       // inline HTML markup
        templateUrl: 'mytemplate.html', // path to the template used by the directive (use template or templateUrl, not both)
        replace: true,    // replace the original markup with the template: true/false
        transclude: true, // copy the original HTML content: true/false
        controller: function ($scope) {
          // controller associated with the directive template
        },
        link: function (scope, element, attrs, controller) {
          // function used for DOM manipulation
        }
      };
    });

Q. What are the different ways to invoke a directive? Ans. There are four equivalent methods to invoke a directive in your angular app:
As an attribute: <span my-dir="exp"></span>
As a class: <span class="my-dir: exp;"></span>
As an element: <my-dir></my-dir>
As a comment: <!-- directive: my-dir exp -->
Q. What is the restrict option in a directive? Ans. The restrict option in an angular directive is used to specify how a directive will be invoked in your angular app, i.e. as an attribute, class, element or comment.
There are four valid options for restrict:
'A' (Attribute)
'C' (Class)
'E' (Element)
'M' (Comment)
Q. Can you define multiple restrict options on a directive? Ans. You can also specify multiple restrict options to support more than one method of directive invocation, such as an element or an attribute. Make sure all are specified in the restrict keyword, as in: restrict: 'EA' Q. What is the auto bootstrap process in AngularJS? OR How is AngularJS initialized automatically? Ans. Angular initializes automatically upon the DOMContentLoaded event, or when the angular.js script is downloaded to the browser and document.readyState is set to complete. At this point AngularJS looks for the ng-app directive, which is the root of angular app compilation and tells AngularJS which part of the DOM it controls. When the ng-app directive is found, Angular will:
Load the module associated with the directive.
Create the application injector.
Compile the DOM starting from the ng-app root element.
This process is called auto-bootstrapping. Q. What is the manual bootstrap process in AngularJS? OR How is AngularJS initialized manually? Ans. You can manually initialize your angular app by using the angular.bootstrap() function. This function takes the modules as parameters and should be called within the angular.element(document).ready() function, which is fired when the DOM is ready for manipulation. Note: You should not use the ng-app directive when manually bootstrapping your app, and you should not mix the automatic and manual ways of bootstrapping your app. Define modules, controllers, services etc.
before manually bootstrapping your app. Q. How to bootstrap your angular app for multiple modules? Ans. AngularJS is automatically initialized for one module. But sometimes it is required to bootstrap for multiple modules, and this can be achieved using two methods:
Automatic bootstrap (by combining multiple modules into one module) - You can combine multiple modules into a single module, and your angular app will be automatically initialized for the newly created module; the other modules will act as dependent modules of the newly created module. For example, suppose you have two modules, module1 and module2, and you have to initialize your app automatically based on these two modules.
Manual bootstrap - You can manually bootstrap your app for multiple modules by using the angular.bootstrap() function.
Q. What is scope in AngularJS? Ans. Scope is a JavaScript object that refers to the application model. It acts as a context for evaluating angular expressions. Basically, it acts as glue between controller and view. Scopes are hierarchical in nature and follow the DOM structure of your AngularJS app. AngularJS has two scope objects: $rootScope and $scope. Q. What is scope hierarchy? OR What is scope inheritance? Ans. The $scope object used by views in AngularJS is organized into a hierarchy. There is a root scope, and the $rootScope can have one or more child scopes. Each controller has its own $scope (which is a child of the $rootScope), so whatever variables you create on $scope within a controller are accessible by the view based on that controller.
For example, suppose you have two controllers, ParentController and ChildController, where ChildController's scope is a child of ParentController's scope. Properties set on the parent scope, such as managerName and companyName, are then also accessible from the child scope, alongside the child's own properties such as teamLeadName. Q. What is the difference between $scope and scope? Ans. The module factory methods like controller, directive, factory, filter, service, animation, config and run receive their arguments through dependency injection (DI). In the case of DI, you inject the scope object with the dollar prefix, i.e. $scope. The reason is that the injected arguments must match the names of injectable objects, which carry the dollar ($) prefix. For example, you can inject the scope and element objects into a controller as given below:

    module.controller('MyController', function ($scope, $element) {
      // injected arguments
    });

When methods like the directive linker function do not receive their arguments through dependency injection, you just pass the scope object without using the dollar prefix, i.e. scope. The reason is that the arguments are passed positionally by the caller:

    module.directive('myDirective', function () {
      return {
        // the linker function does not use dependency injection;
        // the caller passes three arguments to the linker:
        // scope, element and attributes, in that order
        link: function (scope, el, attrs) {
        }
      };
    });

In the case of non-dependency-injected arguments, you can name the arguments as you wish. The above code can be re-written as:

    module.directive('myDirective', function () {
      return {
        link: function (s, e, a) {
          // s == scope
          // e == element
          // a == attributes
        }
      };
    });

In short, in the case of DI the scope object is received as $scope, while in the case of non-DI the scope object is received as scope or under any other name. Q. How is AngularJS compiled? Ans. Angular's HTML compiler allows you to teach the browser new HTML syntax.
The compiler allows you to attach new behaviors or attributes to any HTML element. Angular calls these behaviors directives. The AngularJS compilation process takes place in the web browser; no server-side or pre-compilation step is involved. Angular uses the $compile service to compile your angular HTML page. The compilation process begins after your HTML page (static DOM) is fully loaded. It happens in two phases:
Compile - It traverses the DOM and collects all of the directives. The result is a linking function.
Link - It combines the directives with a scope and produces a live view. Any changes in the scope model are reflected in the view, and any user interactions with the view are reflected in the scope model.
The concept of compile and link comes from the C language, where you first compile the code and then link it to actually execute it. The process is very similar in AngularJS as well. Q. How is AngularJS compilation different from other JavaScript frameworks? Ans. If you have worked on templates in other JavaScript frameworks/libraries like Backbone and jQuery, they process the template as a string and produce a string as the result. You then have to dump this result string into the DOM where you want it, using innerHTML. AngularJS processes the template in another way: it works directly on the HTML DOM rather than on strings, and manipulates it as required. It uses two-way data binding between model and view to sync your data. Q. What directives are used to show and hide HTML elements in AngularJS? Ans. The ng-show and ng-hide directives are used to show and hide HTML elements in AngularJS based on an expression. When the expression is true, ng-show shows the element(s) and ng-hide hides them; when the expression is false, ng-show hides the element(s) and ng-hide shows them. Q. Explain the directives ng-if, ng-switch and ng-repeat. Ans.
ng-if - This directive can add/remove HTML elements from the DOM based on an expression. If the expression is true, it adds HTML elements to the DOM; otherwise the HTML elements are removed from the DOM.
ng-switch - This directive can add/remove HTML elements from the DOM conditionally, based on a scope expression: for example, one element is shown when the case is 1, another when the case is 2, and another when the case is anything other than 1 and 2.
ng-repeat - This directive is used to iterate over a collection of items and generate HTML from it.
Q. What are the ng-repeat special variables? Ans. The ng-repeat directive has a set of special variables which are useful while iterating over a collection:
$index
$first
$middle
$last
The $index variable contains the index of the element being iterated. The $first, $middle and $last variables return a boolean value depending on whether the current item is the first, middle or last element in the collection being iterated. Q. What is ng-include and when to use it? Ans. ng-include is a directive which is used to include external HTML fragments from other files into the view's HTML template. For example, an index.html file can be added inside a div element by using the ng-include directive as an attribute. The ng-include directive is limited to loading HTML fragment files from the same domain; it doesn't work cross-domain, i.e. it can't load HTML fragment files from different domains. Q. What angular components can be defined within AngularJS templates? Ans. AngularJS templates can have the following angular elements and attributes:
Directives
Angular markup ('{{}}')
Filters
Form controls
Q. What is data binding in AngularJS? Ans. AngularJS data binding is the most useful feature, which saves you from writing boilerplate code (i.e. the sections of code which are included in many places with little or no alteration).
Now, developers are not responsible for manually manipulating the DOM elements and attributes to reflect model changes. AngularJS provides two-way data binding to handle the synchronization of data between model and view. Q. What is the issue with two-way data binding? OR Why was one-way data binding introduced? Ans. In order to make data binding possible, Angular uses the $watch API to observe model changes on the scope. Angular registers a watcher for each variable on the scope to observe changes in its value. If the value of a variable on the scope changes, the view gets updated automatically. This automatic change happens because the $digest cycle is triggered. Angular then processes all registered watchers on the current scope and its children, checks for model changes and calls the dedicated watch listeners until the model is stabilized and no more listeners are fired. Once the $digest loop finishes execution, the browser re-renders the DOM and reflects the changes. By default, every variable on a scope is observed by Angular. In this way, unnecessary variables are also observed, which is time-consuming, and as a result the page becomes slow. Hence, to avoid unnecessarily observing variables on the scope object, Angular introduced one-way data binding. Angular JS Online Training
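The $watch/$digest dirty-checking idea described above can be sketched in a few lines of plain JavaScript. This is an illustrative sketch only, not Angular's actual implementation; the Scope, $watch and $digest names mirror Angular's API, but the code below is standalone and runs in any JavaScript engine:

```javascript
// Minimal dirty-checking sketch of Angular's $watch/$digest idea (illustrative only).
function Scope() {
  this.$$watchers = [];
}

// Register a watcher: watchFn reads a value off the scope,
// listenerFn fires when that value changes.
Scope.prototype.$watch = function (watchFn, listenerFn) {
  this.$$watchers.push({ watchFn: watchFn, listenerFn: listenerFn, last: undefined });
};

// One pass over all watchers; returns true if anything changed (dirty).
Scope.prototype.$$digestOnce = function () {
  var dirty = false;
  this.$$watchers.forEach(function (watcher) {
    var newValue = watcher.watchFn(this);
    if (newValue !== watcher.last) {
      watcher.listenerFn(newValue, watcher.last, this);
      watcher.last = newValue;
      dirty = true;
    }
  }, this);
  return dirty;
};

// Keep looping until the model stabilizes (no watcher fired),
// with a TTL guard against watchers that never settle.
Scope.prototype.$digest = function () {
  var ttl = 10;
  while (this.$$digestOnce()) {
    if (--ttl === 0) throw new Error('10 $digest iterations reached');
  }
};

// Usage: a watcher on scope.name keeps a "view" string in sync.
var scope = new Scope();
scope.name = 'World';
var view = '';
scope.$watch(
  function (s) { return s.name; },
  function (newValue) { view = 'Hello ' + newValue + '!'; }
);
scope.$digest();
console.log(view); // Hello World!
scope.name = 'Angular';
scope.$digest();
console.log(view); // Hello Angular!
```

Each $digest() call loops until no watcher reports a change, which is exactly why watching many unnecessary variables slows a page down: every pass re-evaluates every watch function.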
Appium Interview Questions
Q. What Are The Advantages Of Using Appium?
Ans: It allows you to write tests against multiple mobile platforms using the same API. You can write and run your tests using any language or test framework. It is an open-source tool that you can easily contribute to.
Q. What Is Appium's Strongest Point?
Ans: Appium is based on Selenium's WebDriver protocol, an HTTP protocol designed to automate browsers. The idea is actually very nice, as automating an app (especially a webview-based one) is not so different, in terms of required APIs, from automating a browser. Appium is also designed to encourage a 2-tier architecture: one machine runs the test written in one language (Java, Python and Ruby are only a few among the many supported ones) and another one (the test server) actually executes it. Furthermore, the WebDriver protocol targets scalability (because it is based on HTTP), which makes Appium very scalable as well; remember that you need to write your test only once, and Appium will be in charge of executing it on more platforms.
Q. What Is The Appium Philosophy?
Ans:
R1. Test the same app you submit to the marketplace
R2. Write your tests in any language, using any framework
R3. Use a standard automation specification and API
R4. Build a large and thriving open-source community effort
Q. Why Do The Appium Clients Exist?
Ans: We have the Appium clients for 3 reasons:
1) There wasn't time to go through a full commit and release cycle for Selenium once we'd set a release date for 1.0
2) Some of the things that Appium does, and which its users really find useful, are never going to be an official part of the new mobile spec. We want a way to make these extensions available
3) There are some behaviors whose state is as yet unknown. They might make it into the spec and get deleted from the clients, or they might be in category #2
Ultimately, the only reason for the clients will be #2.
And even that is actually evidence that we are conforming to the WebDriver spec (by implementing the extension strategy it recommends) rather than departing from it. The Appium clients are the easiest and cleanest way to use Appium.
Q. Explain What Is Appium?
Ans: Appium is a freely distributed open-source mobile application UI testing framework.
Q. What Are The Main Advantages Of Using Appium On Sauce Labs?
Ans: You save the time it takes to set up the Appium server locally. You don't have to install/configure the mobile emulators/simulators in your local environment. You don't have to make any modifications to the source code of your application. You can start scaling your tests instantly.
Q. Which Language Should I Use To Write My Tests?
Ans: This is probably the best thing about Appium: you can write your tests in any language. Since Appium is nothing more than an HTTP server, a test which needs to interface with Appium can simply use HTTP libraries to create HTTP sessions. You just need to know the Selenium protocol in order to compose the right commands, and that's it! However, as you can imagine, there are already libraries doing this for the most common languages and development frameworks out there: C#, Java, Python, Ruby, and Javascript are just a few examples, and they are all open-source projects.
Q. What Type Of Tests Are Suitable For Appium?
Ans: When it comes to testing, especially of webview-based apps, there are a lot of scenarios that can be tested, also depending on the feature coverage you want to ensure. Appium is pretty handy for testing the scenarios that users will go through when using your app. But if you need to test more than simple UX interactions, then Appium will become a limitation. Think about features like keyboarding: it is not so easy when complex touch/keyboard mixed scenarios are involved, and the probability of a false failure is high. Do not misunderstand me on this: I am not saying it is impossible to do, just not as easy as you might think!
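Since Appium is just an HTTP server speaking the WebDriver JSON wire protocol, a test really can be driven with nothing but an HTTP client. A minimal Python sketch that builds a new-session request body using only the standard library (the device name and app path here are illustrative assumptions, not real values):

```python
import json

# Desired capabilities describe the device/app the session should automate.
caps = {
    "platformName": "Android",
    "deviceName": "emulator-5554",  # illustrative device id
    "app": "/path/to/app.apk",      # illustrative app path
}

# The JSON wire protocol wraps the capabilities in a new-session request
# body, which a test would POST to the Appium server's /session endpoint
# (commonly http://localhost:4723/wd/hub/session).
body = json.dumps({"desiredCapabilities": caps})
print(json.loads(body)["desiredCapabilities"]["platformName"])  # Android
```

In practice you would use one of the ready-made client libraries mentioned above instead of composing these payloads by hand; the sketch only shows why any language with an HTTP library is enough.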
Another little nightmare with Appium is exchanging data. When your test needs to exchange data with your app (especially in the incoming direction), you will need to play some tricks. So always consider that sending and receiving information is not that straightforward. It is not Appium's fault: the WebDriver specification was designed for automating stuff, not exchanging data!
Q. List Out The Appium Abilities?
Ans: Appium abilities are:
Tests web applications
Provides cross-platform automation for native and hybrid mobile apps
Supports the JSON wire protocol
Does not require recompilation of the app
Supports automated tests on physical devices as well as on simulators and emulators
Has no dependency on the mobile device
Q. List Out The Prerequisites To Use Appium?
Ans: Prerequisites to use Appium are:
Android SDK
JDK
TestNG
Eclipse
Selenium Server JAR
WebDriver language binding library
Appium for Windows
APK Info app on Google Play
Node.js
Q. What About Performance?
Ans: Appium is not a huge application and requires very little memory. Its architecture is actually pretty simple and light, as Appium acts like a proxy between your test machine and each platform automation toolkit. Once up and running, Appium listens for HTTP requests from your tests; when a new session is created, a component in Appium's Node.js code called _proxy_ forwards these Selenium commands to the active platform driver. In the case of Android, for example, Appium forwards incoming commands to ChromeDriver (in 90% of cases Appium will not even change the commands while routing them); this works because ChromeDriver supports WebDriver and Selenium. For this reason Appium itself will not allocate much memory; you will instead see a lot of memory being allocated by other processes like ChromeDriver or the iOS automation toolkit (called by Appium while testing and automating).
Q. What Platforms Are Supported?
Ans: Appium currently supports Android and iOS; unfortunately there is no support for Windows.
Q. Do I Need A Server Machine To Run Tests On Appium?
Ans: No! Appium promotes a 2-tier architecture where a test machine connects to a test server running Appium, which automates the whole thing. However, this configuration is not mandatory: you can have Appium running on the same machine where your test runs. Instead of connecting to a remote host, your test will connect to Appium using the loopback address.
Q. List Out The Limitations Of Using Appium?
Ans:
Appium does not support testing of Android versions lower than 4.2
Limited support for hybrid app testing: e.g., it is not possible to test the switching action of an application from the web app to native and vice versa
No support for running Appium Inspector on Microsoft Windows
Q. How Can I Test Android Tablets?
Ans: The best way to test different Android emulator screen sizes is by using the different Android emulator skins. For instance, if you use our Platforms Configurator you'll see the available skins for the different Android versions (e.g. Google Nexus 7 HD, LG Nexus 4, Samsung Galaxy Nexus, Samsung Galaxy S3, etc). Some of these skins are tablets; for example, the Google Nexus 7C is a tablet which has a very large resolution and very high density.
Q. How Can I Run Manual Tests For My Mobile Native App Or Mobile Hybrid App?
Ans: Sauce Labs doesn't support manual tests for mobile native app or mobile hybrid app tests.
Q. What Type Of Keyboard And Buttons Do The Android Emulators Have?
Ans: Android emulators have software buttons and a hardware keyboard. In a regular Android emulator the device buttons are software buttons displayed on the right side of the emulator. For the Android emulators with different skins (e.g. Google Nexus 7 HD, LG Nexus 4, Samsung Galaxy Nexus, Samsung Galaxy S3, etc) the device buttons are also software buttons, overlaid on top of the skin.
For instance, if you hover the mouse around the edges of any of our Android emulators with a specified skin, a hover icon will appear and you should be able to find whatever buttons actually exist on the device that the skinned emulator is trying to emulate (e.g. power button along the top, volume buttons along the edge, back/home buttons right below the screen, etc).
Q. Explain How To Find A DOM Element Or Xpath In A Mobile Application?
Ans: To find the DOM element of an Android application, use the uiautomatorviewer tool.
Q. Explain The Design Concept Of Appium?
Ans: Appium is an "HTTP server" written using the Node.js platform that drives iOS and Android sessions using the WebDriver JSON wire protocol. Hence, before initializing the Appium server, Node.js must be pre-installed on the system. When Appium is downloaded and installed, a server is set up on our machine that exposes a REST API. It receives connection and command requests from the client and executes the commands on mobile devices (Android / iOS). It responds back with HTTP responses. To execute these requests, it uses the mobile test automation frameworks to drive the user interface of the apps:
Apple Instruments for iOS (Instruments are available only in Xcode 3.0 or later with OS X v10.5 and later)
Google UIAutomator for Android, API level 16 or higher
Selendroid for Android, API level 15 or less
Q. What Languages Does Appium Support?
Ans: Appium supports any language that can send HTTP requests, like Java, JavaScript with Node.js, Python, Ruby, PHP, Perl, etc.
Q. Explain The Pros And Cons Of Appium?
Ans:
Pros:
For the programmer, irrespective of the platform being automated (Android or iOS), all the complexities remain under a single Appium server
It opens the door to cross-platform mobile testing, which means the same test can work on multiple platforms
Appium does not require extra components in your app to make it automation friendly
It can automate hybrid, web and native mobile applications
Cons:
Running scripts on multiple iOS simulators at the same time is not possible with Appium
It uses UIAutomator for Android automation, which supports only Android SDK platform API 16 or higher; to support the older APIs, another open-source library called Selendroid is used
Q. I Already Have Platform-specific Tests For My App, What Should I Do To Migrate To Appium?
Ans: Unfortunately there is no magic formula to translate your tests into Selenium tests. If you developed a test framework on different layers and observed good programming principles, you should be able to act on some components of your tests in order to migrate your suites to Appium. Your current tests are going to be easy to migrate if they are already using an automation framework or something close to a command-based interaction. Truth be told, you will probably need to write your tests from the beginning; what you can do is reuse your existing components.
Q. How Much Time Does It Take To Write A Test In Appium?
Ans: Of course it depends on the test. If your test simply runs a scenario, it will take as many commands as the number of interactions needed to be performed (thus very few lines). If you are trying to exchange data, then your test will take more time for sure, and the test will also become difficult to read.
Q. Any Tips Or Tricks To Speed Up My Test Writing Activity Or My Migration Process?
Ans: Here is one piece of advice.
Since your tests will mostly consist of automation tasks (if this condition is not met, you might want to reconsider using Appium), make interactions reusable! Do not write the same sub-scenarios twice in your tests: make a diagram of what your scenarios are and split them into sub-activities; you will get a graph where some nodes are reachable from more than one node. So make those tasks parametric and call them in your tests! This will make your test writing experience better, even when you need to migrate from existing tests (hopefully you already did this activity for your existing suites).
Q. What Test Frameworks Are Supported By Appium?
Ans: Appium does not support test frameworks because there is no need to support them! You can use Appium with any test framework you want. NUnit and the .NET Unit Test Framework are just a few examples; you will write your tests using one of the drivers for Appium, thus your tests will interface with Appium just in terms of an external dependency. Use whatever test framework you want!
Q. Can I Interact With My Apps Using Javascript While I Am Testing With Appium?
Ans: Yes! Selenium has commands to execute Javascript instructions on your app from your tests. Basically, you can send a JS script from your test to your app; when the commands run on Appium, the server will send the script to your app wrapped into an anonymous function to be executed.
Q. Can It Return Values?
Ans: Yes, your Javascript interaction can get more advanced: your script can return a value, which will be delivered to your test when the HTTP response is sent back by Appium once your Javascript has finished running. However, this scenario comes with a limitation: your Javascript can send back only primitive types (integers, strings), not complex objects. The limitation can be overcome by passing objects as JSON strings or by modifying Appium's or Selenium's code to support specific objects.
Q. How Can I Exchange Data Between My Test And The App I Am Testing?
Ans: Appium, or actually the WebDriver specification, is not made for exchanging data with your app; it is made to automate it. For this reason, you will probably be surprised to find that data exchange is not so easy. It is actually not impossible to exchange data with your app, but it will require you to build more layers of testability.
Q. What Is Data Exchange?
Ans: When I say "data exchange" I am not referring to scenarios like getting or setting the value of a textbox. I am also not referring to getting or setting the value of an element's attribute. All these things are easy to achieve in Appium, as Selenium provides commands just for those. By "data exchange" I mean exchanging information hosted by complex objects stored in different parts of your webview-based app, like the window object. Consider when you dispatch and capture events: your app can possibly do many things, and the ways data flows can be handled are many. Some objects might also have a state, and the state machine behind some scenarios in your app can be large and articulated. For all these reasons you might experience problems when testing.
Q. Can Data Be Exchanged Through Javascript?
Ans: Selenium provides commands to execute Javascript on the app; it is also possible to execute functions and have them return data (only basic types). If you exchange JSON strings it should be fine, as JSON.parse(str) will turn your JSON string into an object on the app side, while on the test side (depending on the language you are using) you can rely on hundreds of libraries to parse the string you receive.
Q. I Don't Want To Set Up A Whole Infrastructure For My Tests And I Don't Want To Spend Money On Hardware. Can Appium Help Me?
Ans: If you think about it, what is really required from you is writing tests. The fact that you must deploy an Appium server somewhere is something extra.
If you want to skip this part, you can rely on some web services that have already deployed a whole architecture of Appium servers for your tests. Most of them are online labs, and they support Selenium and Appium.
Q. I Need To Debug Appium, Is It Difficult?
Ans: Not really! Appium is a Node.js application, so it is Javascript in essence. The code is available on GitHub and can be downloaded in a few seconds, as it is small and not so complex. Depending on what you have to debug, you will probably need to go deeper in your debugging experience; however, there are some key points where setting a breakpoint is always worthwhile: the proxy component is worth a mention. In appium/lib/server/proxy.js you can set a breakpoint in the function doProxy(req, res); it will be hit every time commands are sent to platform-specific components to be translated into automation commands.
Q. Explain What Is Appium Inspector?
Ans: Similar to the Selenium IDE record-and-playback tool, Appium has an "Inspector" to record and play back. It records and plays back native application behavior by inspecting the DOM, and generates the test scripts in any desired language. However, Appium Inspector does not support Windows; there, UIAutomator Viewer can be used as an alternative.
Q. What Are The Basic Commands That I Can Use In The Selenium Protocol?
Ans: Selenium provides a collection of commands to automate your app. With those commands you can basically do the following:
Locate web elements in your webview-based app's pages by using their ids or class names
Raise events on located elements, like Click()
Type inside textboxes
Get or set a located element's attributes
Execute some Javascript code
Change the context in order to test the native part of your app, or the webview; if your app uses more webviews, you can switch the context to the webview you desire, and if your webview has frames or iframes inside, you can change context to one of them
Detect alert boxes and dismiss or accept them
Q. I Want To Run My Tests In A Multithreaded Environment, Any Problems With That?
Ans: Yes! You need some special care when using Appium in a multithreaded environment. The problem is not really with using threads in your tests: you can use them, but you must ensure that no more than one test runs at the same time against the same Appium server. As I mentioned, Appium does not support multiple sessions, and unless you have implemented an additional layer on top of it to handle this case, some tests might fail.
Q. Mention What Are The Basic Requirements For Writing Appium Tests?
Ans: For writing Appium tests you require:
Driver Client: Appium drives mobile applications as though it were a user. Using a client library, you write your Appium tests, which wrap your test steps and send them to the Appium server over HTTP.
Appium Session: You have to first initialize a session, as an Appium test takes place within a session. Once the automation is done for one session, it can be ended, and the server waits for another session.
Desired Capabilities: To initialize an Appium session you need to define certain parameters known as "desired capabilities", like PlatformName, PlatformVersion, DeviceName and so on. They specify the kind of automation one requires from the Appium server.
Driver Commands: You can write your test steps using a large and expressive vocabulary of commands.
Q. How Can I Run Android Tests Without Appium?
Ans: For older versions of Android, Appium might not be supported. For instance, Appium is only supported in Android versions 4.4 or later for Mobile Web Application tests, and Android versions 2.3, 4.0 and later for Mobile Native Application and Mobile Hybrid Application tests. For those versions in which Appium is not supported, you can request an emulator driven by Webdriver + Selendroid. All you need to do is use our Platforms Configurator and select Selenium for the API instead of Appium.
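The session lifecycle described above (desired capabilities start a session, driver commands run inside it, then the session is ended) can be sketched with a toy in-memory server. This is purely illustrative: the class and method names are invented for the sketch and are not the real Appium client API.

```python
class FakeAppiumServer:
    """Toy model of the Appium test flow: session in, commands, session out."""
    def __init__(self):
        self.sessions = {}
        self.next_id = 0

    def create_session(self, desired_caps):
        # A new session is created from the desired capabilities.
        self.next_id += 1
        sid = f"session-{self.next_id}"
        self.sessions[sid] = {"caps": desired_caps, "log": []}
        return sid

    def execute(self, sid, command):
        # Driver commands always run within an existing session.
        self.sessions[sid]["log"].append(command)

    def quit(self, sid):
        # Ending the session frees the server for the next one.
        del self.sessions[sid]

server = FakeAppiumServer()
sid = server.create_session({"platformName": "Android", "deviceName": "emu"})
server.execute(sid, "click login_button")
server.quit(sid)
print(sid in server.sessions)  # False
```

Note how the toy server holds one log per session: this mirrors why, as mentioned above, only one test should run at a time against a single Appium server.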
In the Sauce Labs test you will notice that the top of the emulator says "AndroidDriver Webview App". In addition, you will notice that you get a "Selenium Log" tab which has the output of the Selendroid driver. With an emulator driven by Webdriver + Selendroid you will be able to test Mobile Web Applications only. You should be able to select any Android emulator version from 4.0 to the latest version and any Android emulator skin (e.g. "deviceName":"Samsung Galaxy Tab 3 Emulator").
Q. How Can I Run iOS Tests Without Appium?
Ans: For older versions of iOS, Appium might not be supported. For instance, Appium is supported in iOS versions 6.1 and later. For earlier versions of iOS, the tool or driver used to drive your mobile application's automated tests is called iWebdriver. To obtain a simulator driven by iWebdriver, use our Platforms Configurator and select Selenium for the API instead of Appium. With an emulator driven by iWebdriver you will be able to test Mobile Web Applications only. In addition, in the Sauce Labs test you will notice a "Selenium Log" tab which has the output of iWebdriver.
Q. What Mobile Web Browsers Can I Automate In The Android Emulator?
Ans: Currently the only browser that can be automated in our Android emulators is the stock browser (i.e. Browser). The Android stock browser is an Android flavor of Chromium, which presumably implies that its behavior is closer to that of Google Chrome. Contact for more on Appium Online Training
Chef (Software) Interview Questions
Q. What Is A Resource?
Ans: A resource represents a piece of infrastructure and its desired state, such as a package that should be installed, a service that should be running, or a file that should be generated.
Q. What Is A Recipe?
Ans: A recipe is a collection of resources that describes a particular configuration or policy. A recipe describes everything that is required to configure part of a system. Recipes do things such as:
Install and configure software components.
Manage files.
Deploy applications.
Execute other recipes.
Q. What Happens When You Don't Specify A Resource's Action?
Ans: When you don't specify a resource's action, Chef applies the default action.
Q. Write A Service Resource That Stops And Then Disables The Httpd Service From Starting When The System Boots?
Ans:
service 'httpd' do
  action [:stop, :disable]
end
Q. How Does A Cookbook Differ From A Recipe?
Ans: A recipe is a collection of resources, and typically configures a software package or some piece of infrastructure. A cookbook groups together recipes and other information in a way that is more manageable than having just recipes alone. For example, in this lesson you used a template resource to manage your HTML home page from an external file. The recipe stated the configuration policy for your web site, and the template file contained the data. You used a cookbook to package both parts up into a single unit that you can later deploy.
Q. How Does Chef-apply Differ From Chef-client?
Ans: Chef-apply applies a single recipe; chef-client applies a cookbook. For learning purposes, we had you start off with chef-apply because it helps you understand the basics quickly. In practice, chef-apply is useful when you want to quickly test something out. But for production purposes, you typically run chef-client to apply one or more cookbooks.
Q. What's The Run-list?
Ans: The run-list lets you specify which recipes to run, and the order in which to run them.
The run-list is important when you have multiple cookbooks and the order in which they run matters.
Q. What Are The Two Ways To Set Up A Chef Server?
Ans: Install an instance on your own infrastructure. Use hosted Chef.
Q. What's The Role Of The Starter Kit?
Ans: The Starter Kit provides certificates and other files that enable you to securely communicate with the Chef server.
Q. What Is A Node?
Ans: A node represents a server and is typically a virtual machine, container instance, or physical server – basically any compute resource in your infrastructure that's managed by Chef.
Q. What Information Do You Need In Order To Bootstrap?
Ans: You need:
Your node's host name or public IP address.
A user name and password you can log on to your node with.
Alternatively, you can use key-based authentication instead of providing a user name and password.
Q. What Happens During The Bootstrap Process?
Ans: During the bootstrap process, the node downloads and installs chef-client, registers itself with the Chef server, and does an initial check-in. During this check-in, the node applies any cookbooks that are part of its run-list.
Q. Which Of The Following Lets You Verify That Your Node Has Successfully Bootstrapped?
Ans:
The Chef management console.
knife node list
knife node show
You can use all three of these methods.
Q. What Is The Command You Use To Upload A Cookbook To The Chef Server?
Ans: knife cookbook upload.
Q. How Do You Apply An Updated Cookbook To Your Node?
Ans: We mentioned two ways:
Run knife ssh from your workstation.
SSH directly into your server and run chef-client.
You can also run chef-client as a daemon, or service, to check in with the Chef server on a regular interval, say every 15 or 30 minutes.
Update your Apache cookbook to display your node's host name, platform, total installed memory, and number of CPUs in addition to its FQDN on the home page. Update index.html.erb like this.
<h1>hello from <%= node['fqdn'] %></h1>
<%= node['hostname'] %> – <%= node['platform'] %>
RAM: <%= node['memory']['total'] %>
CPUs: <%= node['cpu']['total'] %>
Then upload your cookbook and run it on your node.
Q. What Would You Set Your Cookbook's Version To Once It's Ready To Use In Production?
Ans: According to Semantic Versioning, you should set your cookbook's version number to 1.0.0 at the point it's ready to use in production.
Q. Create A Second Node And Apply The Awesome Customers Cookbook To It. How Long Does It Take?
Ans: You already accomplished the majority of the tasks that you need. You wrote the awesome customers cookbook, uploaded it and its dependent cookbooks to the Chef server, applied the awesome customers cookbook to your node, and verified that everything's working. All you need to do now is:
Bring up a second Red Hat Enterprise Linux or CentOS node.
Copy your secret key file to your second node.
Bootstrap your node the same way as before. Because you include the awesome customers cookbook in your run-list, your node will apply that cookbook during the bootstrap process.
The result is a second node that's configured identically to the first one. The process should take far less time because you already did most of the work. Now when you fix an issue or add a new feature, you'll be able to deploy and verify your update much more quickly!
Q. What's The Value Of Local Development Using Test Kitchen?
Ans: Local development with Test Kitchen:
Enables you to use a variety of virtualization providers that create virtual machine or container instances locally on your workstation or in the cloud.
Enables you to run your cookbooks on servers that resemble those that you use in production.
Speeds up the development cycle by automatically provisioning and tearing down temporary instances, resolving cookbook dependencies, and applying your cookbooks to your instances.
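Semantic Versioning, mentioned in the cookbook-version answer above, uses MAJOR.MINOR.PATCH version strings. A small Python sketch of how such a version might be bumped (the bump helper is an illustrative name, not a Chef tool):

```python
def bump(version: str, part: str) -> str:
    """Bump a MAJOR.MINOR.PATCH version string (illustrative helper)."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":          # breaking change
        return f"{major + 1}.0.0"
    if part == "minor":          # backwards-compatible feature
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # backwards-compatible bug fix

print(bump("0.4.2", "major"))  # 1.0.0 -> ready for production
print(bump("1.0.0", "patch"))  # 1.0.1 -> bug-fix release
```

Bumping the major part of a pre-1.0 cookbook to 1.0.0 is exactly the "ready for production" step the answer describes.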
Django Interview Questions
Q. What Is Django?
Ans: Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design. Developed by a fast-moving online-news operation, Django was designed to handle two challenges: the intensive deadlines of a newsroom and the stringent requirements of the experienced Web developers who wrote it. It lets you build high-performing, elegant Web applications quickly.
Q. What Does Django Mean?
Ans: Django is named after Django Reinhardt, a gypsy jazz guitarist from the 1930s to early 1950s who is known as one of the best guitarists of all time.
Q. Which Architectural Pattern Does Django Follow?
Ans: Django follows the Model-View-Controller (MVC) architectural pattern.
Q. Mention What The Django Templates Consist Of?
Ans: A template is a simple text file. It can create any text-based format like XML, CSV, HTML, etc. A template contains variables that get replaced with values when the template is evaluated, and tags ({% tag %}) that control the logic of the template.
Q. Which Foundation Manages The Django Web Framework?
Ans: The Django web framework is managed and maintained by an independent non-profit organization named the Django Software Foundation (DSF).
Q. Is Django Stable?
Ans: Yes, Django is quite stable. Many companies like Disqus, Instagram, Pinterest, and Mozilla have been using Django for many years.
Q. What Are The Features Available In The Django Web Framework?
Ans: Features available in the Django web framework are:
Admin interface (CRUD)
Templating
Form handling
Internationalization
Session and user management, role-based permissions
Object-relational mapping (ORM)
Testing framework
Fantastic documentation
Q. What Are The Advantages Of Using Django For Web Development?
Ans: It facilitates you to divide code modules into logical groups to make them flexible to change. It provides an auto-generated web admin to make website administration easy. It provides a pre-packaged API for common user tasks.
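The idea of a template, variables replaced with values when the text is evaluated, can be illustrated with Python's standard library. Note this uses string.Template's $-style placeholders purely as an analogy; it is not Django's actual template engine, which uses {{ variable }} and {% tag %} syntax:

```python
from string import Template

# A tiny text template: $name and $count are the variables that get
# replaced with values when the template is evaluated.
page = Template("<h1>Hello, $name!</h1><p>You have $count messages.</p>")
html = page.substitute(name="Ada", count=3)
print(html)  # <h1>Hello, Ada!</h1><p>You have 3 messages.</p>
```

Django's engine adds the tag layer on top of this substitution idea, so templates can also express loops and conditionals.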
It provides a template system to define an HTML template for your web page, to avoid code duplication. It enables you to define what URL is for a given function. It enables you to separate business logic from the HTML.
Q. How To Create A Project In Django?
Ans: To start a project in Django, use the command django-admin.py startproject projectname. This creates the following layout:
manage.py
projectname/
    __init__.py
    settings.py
    urls.py
Q. How Can You Set Up The Database In Django?
Ans: To set up a database in Django, edit mysite/settings.py; it is a normal Python module with module-level variables representing Django settings. By default, Django uses an SQLite database. This is easy for Django users because it doesn't require any other type of installation. If you use another database, you have to set the following keys in the DATABASES 'default' item to match your database connection settings:
ENGINE: you can change the database by using 'django.db.backends.sqlite3', 'django.db.backends.mysql', 'django.db.backends.postgresql_psycopg2', 'django.db.backends.oracle' and so on.
NAME: the name of your database. If you are using SQLite as your database, the database will be a file on your computer; in that case, NAME should be the full absolute path, including the file name, of that file.
Note: You have to add settings like PASSWORD, HOST, USER, etc. for your database if you are not choosing SQLite as your database.
Q. What Do The Django Templates Contain?
Ans: A template is a simple text file. It can create any text-based format like XML, CSV, HTML, etc. A template contains variables that get replaced with values when the template is evaluated, and tags ({% tag %}) that control the logic of the template.
Q. Is Django A Content Management System (CMS)?
Ans: No, Django is not a CMS. Instead, it is a Web framework and a programming tool that enables you to build websites.
Q. What Is The Use Of The Session Framework In Django?
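Putting the ENGINE and NAME keys described above together, a mysite/settings.py fragment might look like this (the database file path is an illustrative assumption):

```python
# mysite/settings.py -- illustrative database configuration
DATABASES = {
    'default': {
        # Default backend; swap in 'django.db.backends.mysql',
        # 'django.db.backends.postgresql_psycopg2', etc. as needed.
        'ENGINE': 'django.db.backends.sqlite3',
        # For SQLite, NAME is the full absolute path of the database file.
        'NAME': '/path/to/mysite/db.sqlite3',
        # For other backends, also set USER, PASSWORD, HOST and PORT.
    }
}
```

Because SQLite needs no separate server, this default works out of the box; only the non-SQLite backends need the extra connection keys.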
Ans: The session framework facilitates you to store and retrieve arbitrary data on a per-site-visitor basis. It stores data on the server side and abstracts the receiving and sending of cookies. Sessions are implemented through a piece of middleware.
Q. How Can You Set Up Static Files In Django?
Ans: There are three main things required to set up static files in Django:
1. Set STATIC_ROOT in settings.py
2. Run manage.py collectstatic
3. Set up a Static Files entry on the PythonAnywhere web tab
Q. How To Use File-based Sessions?
Ans: You have to set the SESSION_ENGINE setting to "django.contrib.sessions.backends.file" to use file-based sessions.
Q. What Are Some Typical Usages Of Middleware In Django?
Ans: Some usages of middleware in Django are:
Session management
User authentication
Cross-site request forgery protection
Content gzipping, etc.
Q. What Do Django Field Class Types Do?
Ans: The Django field class types specify:
The database column type.
The default HTML widget to use while rendering a form field.
The minimal validation requirements used in the Django admin.
Automatically generated forms.
Q. What Constitutes Django Templates?
Ans: A template can create formats like XML, HTML and CSV (which are text-based formats). In general terms, a template is a simple text file. It is made up of variables that will later be replaced by values when the template is evaluated, and has tags which control the template's logic.
Q. How Do You Use Views In Django?
Ans: Views take a request and return a response. Let's write a view in Django: "example", using the template example.html and the datetime module to tell us the exact time the page was reloaded.
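The static-files and file-based-session settings described above can be sketched together in settings.py (the STATIC_ROOT path is an illustrative assumption):

```python
# settings.py -- illustrative static files and session configuration
STATIC_URL = '/static/'
# Where `manage.py collectstatic` gathers all static files for serving:
STATIC_ROOT = '/path/to/mysite/staticfiles'

# Switch from the default database-backed sessions to file-based sessions:
SESSION_ENGINE = 'django.contrib.sessions.backends.file'
```

With SESSION_ENGINE set this way, session data is written to files on the server instead of the database, while cookies still only carry the session key.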
Let's edit a file called views.py inside randomsite/randomapp/. To do this, save and copy the following into the file:
from datetime import datetime
from django.shortcuts import render
def home(request):
    return render(request, 'example.html', {'right_now': datetime.utcnow()})
You have to define the view first, and then uncomment this line located in the file urls.py:
# url(r'^$', 'randomsite.randomapp.views.home', name='example'),
This will reload the site, making the changes visible.
Q. What Makes Up Django Architecture?
Ans: Django runs on MVC architecture. Following are the components that make up Django architecture:
Models: Models describe back-end details such as the database schema (relationships).
Views: Views control what is to be shown to the end user.
Templates: Templates deal with the formatting of a view.
Controller: Takes entire control of the Models. An MVC framework can be compared to cable TV with a remote: the television set is the View (which interacts with the end user), the cable provider is the Model (which works in the back-end), and the Controller is the remote that controls which channel to select and display through the View.
Q. What Does The Session Framework Do In The Django Framework?
Ans: The session framework in Django stores data on the server side and interacts with end users. Sessions are generally used with middleware. The framework also helps in receiving and sending cookies for authentication of a user.
Q. Mention Caching Strategies That You Know In Django?
Ans: A few caching strategies that are available in Django are as follows:
File system caching
In-memory caching
Using Memcached
Database caching
Q. Why Should Django Be Used For Web Development?
Ans:
It allows you to divide code modules into logical groups, making them flexible to change
To ease website administration, it provides an auto-generated web admin
It provides a pre-packaged API for common user tasks
It gives you a template system to define the HTML template for your web page, avoiding code duplication
It enables you to define what the URL for a given function should be
It enables you to separate business logic from the HTML
Everything is in Python
Q. What Do You Think Are The Limitations Of Django Object-Relational Mapping (ORM)?
Ans: If the data is complex and involves multiple joins, using SQL directly will be clearer. If performance is a concern, the ORM may not be your choice: although object-relational mapping is generally considered a good option, SQL has the upper hand when constructing an optimized query.
Q. Mention What The Django Field Class Types Determine?
Ans: Field class types determine:
The database column type
The default HTML widget to use when rendering a form field
The minimal validation requirements used in the Django admin and in automatically generated forms
Q. List Out The Inheritance Styles In Django?
Ans: In Django, there are three possible inheritance styles:
Abstract base classes: This style is used when you only want the parent class to hold information that you don't want to type out for each child model
Multi-table inheritance: This style is used if you are sub-classing an existing model and need each model to have its own database table
Proxy models: You can use this style if you only want to modify the Python-level behavior of the model, without changing the model's fields
Q. Mention What Command Line Can Be Used To Load Data Into Django?
Ans: To load data into Django, use the command django-admin.py loaddata. The command searches for and loads the contents of the named fixtures into the database.
Q. Explain What The django-admin.py makemessages Command Is Used For?
Ans: This command runs over the entire source tree of the current directory and pulls out all the strings marked for translation. It creates a message file in the locale directory.
Q. Explain The Use Of The Session Framework In Django?
Ans: In Django, the session framework enables you to store and retrieve arbitrary data on a per-site-visitor basis. It stores data on the server side and abstracts the sending and receiving of cookies. Sessions are implemented through a piece of middleware.
Q. Explain How You Can Use File Based Sessions?
Ans: To use file-based sessions, set the SESSION_ENGINE setting to "django.contrib.sessions.backends.file".
Q. Explain Migration In Django And How You Can Do It In SQL?
Ans: Migration in Django propagates changes to your models (deleting a model, adding a field, etc.) into your database schema. There are several commands you use to interact with migrations:
migrate
makemigrations
sqlmigrate
To do the migration in SQL, you have to print the SQL statement for resetting sequences for a given app name:
django-admin.py sqlsequencereset
Use this command to generate SQL that will fix cases where a sequence is out of sync with its automatically incremented field data.
Contact for more on Django Online Training
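The three inheritance styles listed above can be sketched in a single models.py. This is a declarative sketch that requires Django to actually run; the class and field names are illustrative assumptions, not from the source.

```python
# models.py sketch (requires Django; names are illustrative).
from django.db import models

class CommonInfo(models.Model):
    # Abstract base class: holds shared fields, gets no table of its own.
    name = models.CharField(max_length=100)

    class Meta:
        abstract = True

class Student(CommonInfo):
    # Concrete child of the abstract base: one table with all fields.
    # (Multi-table inheritance would instead subclass a *concrete* model,
    # giving each model its own table joined by a one-to-one link.)
    home_group = models.CharField(max_length=5)

class OrderedStudent(Student):
    # Proxy model: same table as Student, only Python-level behavior
    # (here, default ordering) changes.
    class Meta:
        proxy = True
        ordering = ["name"]
```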
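The per-visitor session storage described in the answers above can be sketched without running a full Django project, because `request.session` behaves like a dictionary. This is a minimal, framework-independent sketch; the `visit_count` key is an arbitrary name chosen for illustration, and a real Django view would wrap the result in an `HttpResponse`.

```python
def count_visits(request):
    # request.session acts like a dict persisted server-side per visitor;
    # "visit_count" is an illustrative key name, not a Django convention.
    count = request.session.get("visit_count", 0) + 1
    request.session["visit_count"] = count
    # In a real Django view you would return HttpResponse(...) here;
    # returning the bare count keeps this sketch framework-independent.
    return count
```

Because the function only relies on the dict-like `session` attribute, it works with any object exposing one, which is also how you would unit-test it without a browser.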
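The middleware uses listed above (session management, authentication, CSRF protection, content gzipping) all follow the same pattern: a callable that wraps `get_response` and can act before and after the view runs. A minimal sketch of that style follows; the `X-Example` header name is an illustrative assumption, not part of Django.

```python
class SimpleHeaderMiddleware:
    """Django-style middleware: wraps get_response and can run code
    both before and after the view handles the request."""

    def __init__(self, get_response):
        # get_response is the next middleware in the chain, or the view.
        self.get_response = get_response

    def __call__(self, request):
        # Code here runs before the view, for every request.
        response = self.get_response(request)
        # Code here runs after the view; we add an (assumed) custom header.
        response["X-Example"] = "processed"
        return response
```

In a real project this class would be registered by dotted path in the MIDDLEWARE setting; here it can be exercised directly with any callable that returns a dict-like response.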
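The caching strategies listed above are selected in Django through the CACHES setting in settings.py. A hedged settings fragment is below: the backend dotted paths are standard Django cache backends, while the LOCATION values (the cache directory and table name) are placeholders you would replace.

```python
# settings.py fragment: one alias per caching strategy, for illustration.
CACHES = {
    # In-memory (per-process, local-memory) caching:
    "default": {
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
    },
    # File system caching (the directory is a placeholder):
    "files": {
        "BACKEND": "django.core.cache.backends.filebased.FileBasedCache",
        "LOCATION": "/var/tmp/django_cache",
    },
    # Database caching (the table name is a placeholder; the table is
    # created with manage.py createcachetable):
    "db": {
        "BACKEND": "django.core.cache.backends.db.DatabaseCache",
        "LOCATION": "my_cache_table",
    },
}
```

A Memcached alias would follow the same shape with a memcached backend and a host:port LOCATION; the exact backend class name varies by Django version, so it is omitted here.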
Exchange Server Interview Questions
Q.What is Exchange Server?
Ans: Exchange Server is Microsoft's messaging system, which provides industry-leading email, calendaring and unified messaging.
Q.What are the different Exchange Server versions?
Ans:
Exchange Server 5.5
Exchange Server 2000
Exchange Server 2003
Exchange Server 2007
Exchange Server 2010
Q.What are the differences between Exchange Server 2003 Standard and Enterprise Editions?
Ans: The differences between Exchange Server 2003 Standard and Enterprise Editions are:
Feature | Standard Edition | Enterprise Edition
Storage groups supported | 1 storage group | 4 storage groups
Number of databases per storage group | 2 databases | 5 databases
Individual database size | 16 gigabytes (GB) maximum | 16 terabytes, limited only by hardware
Exchange clustering | Not supported | Supported
X.400 connector | Not included | Included
Q.What are the main differences between Exchange 5.5 and Exchange 2000/2003?
Ans: Exchange 2000 does not have its own directory or directory service; it uses Active Directory instead. Exchange 2000 uses native components of Windows 2000 (namely IIS and its SMTP, NNTP, W3SVC and other components, Kerberos and others) for many core functions. SMTP is now a full peer to RPC, and it is the default transport protocol between Exchange 2000 servers. Exchange 2000 supports active/active clustering and was certified for Windows 2000 Datacenter. Exchange 2000 scales much higher. It boasts conferencing services and instant messaging.
Q.What are the minimum hardware requirements for Exchange Server 2003?
Ans:
Processor – Pentium 133 MHz
Operating System – Windows 2000 SP3
Memory – 256 MB
Disk Space – 200 MB for system files and 500 MB for the Exchange Server installation
File System – NTFS
Q.What are the steps involved in Exchange Server installation?
Ans:
Prerequisites installation – ASP.NET, IIS, SMTP, NNTP and WWW services
Forest preparation
Domain preparation
Exchange Server 2003 installation
Q.Why not install Exchange on the same machine as a DC?
Ans: The main reason not to install Exchange Server on a domain controller is that whenever you restart the server for any reason, shutting down the Exchange Server services takes a lot of time.
Q.What can you do, and what will be the effect, if the ASP.NET service is not available while installing Exchange Server 2003?
Ans: ASP.NET files are important for authentication, delegation and securing web publication. ASP.NET should be installed before installing Exchange Server 2003.
Q.What are the Exchange Server 2003 deployment tools?
Ans: The Exchange Server 2003 Deployment Tools are a compilation of old and new Microsoft Product Support Services (PSS) support tools that you can use to prepare Microsoft Exchange Server 5.5 and the Microsoft Active Directory directory service infrastructure for the installation of Microsoft Exchange Server 2003:
Installation and Upgrade Prerequisites
Enabling Windows Services
DCDiag tool
NetDiag tool
ForestPrep
DomainPrep
Q.What Windows versions are supported by Exchange Server 2003?
Ans:
Windows 2000 Service Pack 3 (Standard, Enterprise and Datacenter Editions)
Windows 2003 Service Pack 1 (Standard, Enterprise and Datacenter Editions)
Q.In which domains must DomainPrep be run?
Ans:
The forest root domain
All domains that will contain Exchange Server 2003
All domains that will contain Exchange mailbox-enabled objects
Q.What is ForestPrep?
Ans: ForestPrep updates the schema and configuration partition in Active Directory. It extends the schema to include Exchange Server 2003-specific classes and attributes. To run ForestPrep, the administrator should have Schema Admin and Enterprise Admin permissions over the domain.
Q.What is DomainPrep?
Ans: DomainPrep prepares the domain partition in Active Directory.
ForestPrep should be run only once in a forest, whereas DomainPrep should be run in the following domains:
The forest root domain
All domains that will contain Exchange Server 2003
All domains that will contain Exchange mailbox-enabled objects
Q.Which two groups are created by DomainPrep?
Ans: The DomainPrep switch creates the groups and permissions required by Exchange Server 2003. Two security groups are created:
Exchange Enterprise Servers – a domain local group that contains all Exchange servers in the forest
Exchange Domain Servers – a global group that contains all Exchange servers running in the domain that you have selected
Q.What does DomainPrep do?
Ans: DomainPrep updates the domain partition and creates two new security groups for Exchange Server 2003:
Exchange Enterprise Servers
Exchange Domain Servers
Q.How to run ForestPrep?
Ans: Go to the command prompt and type the following:
D:\setup\i386\setup.exe /forestprep
where D: represents the CD drive.
Note: it will ask for an administrator account that has the required permissions to run the setup.
Q.How does unattended installation of Exchange Server 2003 work?
Ans: Unattended installations are useful for rapidly deploying subsequent Exchange Server 2003 installations into an existing organization. The process of creating the answer file is essentially the same as a manual setup: selecting the components you want to install and the installation path, choosing whether to create a new organization or join an existing one, agreeing to the license, and so on. Instead of performing a manual installation, the Exchange installation wizard writes the configuration to an .ini file, specifically for use with the /unattendfile setup switch to start the installation.
Q.When can you use the unattended installation of Exchange Server 2003?
Ans: Unattended installation of Exchange Server 2003 is very useful when you are going to install Exchange Server remotely.
It is also useful when you are deploying a number of new Exchange servers in an existing organization. We can save time deploying multiple servers by automating the entire installation procedure.
Q.What's new in Exchange Server 2013?
Ans: Below are the new features in Exchange Server 2013:
New unified management console called the Exchange Admin Center
Server role architecture changed to two server roles – Mailbox Server role and Client Access Server role
Public folders are now in mailbox databases, which can be replicated to other mailbox databases
Site mailboxes introduced to allow users to access SharePoint sites and emails from Outlook 2013, using the same client interface
Exchange 2013 offers greater integration with SharePoint 2013 and Lync 2013
Brand new Outlook Web App optimized for tablets and mobile devices as well as desktops and laptops
Unified Messaging in Exchange 2013 comes with the same voicemail features as Exchange 2010, but the architecture change to only two server roles means all UM-related components, services and functionality are now in the Mailbox Server role
Users can move mailboxes in batches, with the option to send mailbox move reports as emails
Lots of enhancements in the Database Availability Group, like Managed Availability and the Managed Store
Exchange workload is a new feature in Exchange 2013 defined for the purpose of Exchange system resource management
Exchange 2013 setup is completely rewritten, so deploying and keeping Exchange 2013 up to date is now easier
Data Loss Prevention is a new feature which allows protecting the company's sensitive data and informing users of internal compliance policies
Q.What is the Exchange Admin Center?
Ans: The Exchange Admin Center is the new web-based Exchange management console for Exchange Server 2013. It allows for ease of use and is optimized for management of on-premises, online and hybrid Exchange deployments.
The EAC replaces the Exchange Management Console and the Exchange Control Panel, but ECP is still the URL for the Exchange Admin Center.
New features in the Exchange Admin Center:
List view – more than 20,000 objects can be viewed in the EAC, where the legacy ECP allowed only 500 objects
Add/remove columns for recipients
Public folders can be managed from the Exchange Admin Center
Long-running processes are shown in the notification bar
Role Based Access Control user editing can be done from the EAC
Unified Messaging tools like call statistics and user call logs can be accessed from the EAC
Q.Explain the Exchange 2013 architecture?
Ans: The legacy versions Exchange 2007 and Exchange 2010 were released with five server roles: Mailbox, Client Access, Hub Transport, Unified Messaging and Edge Transport. The server role architecture has been changed in Exchange 2013, which was released with only two server roles:
Mailbox Server role – holds the same functions as the Mailbox, client access protocol, Hub Transport and Unified Messaging server roles in Exchange 2010
Client Access Server role – works like the Client Access Server role in Exchange 2010, but is now a stateless server: it does no data rendering, and nothing is stored or queued on the Client Access Server. The CAS offers all the usual client access protocols: HTTP, POP, IMAP and SMTP.
Q.Why has the Exchange 2013 architecture been changed to two server roles?
Ans: Exchange 2007 and 2010 were architected with certain technology constraints that existed at the time; CPU performance was the key constraint when Exchange 2007 was released, and server roles were introduced to alleviate the situation. However, the server roles in Exchange 2007 and Exchange 2010 are tightly coupled. Nowadays CPU horsepower is less expensive and no longer a constraining factor. With that constraint lifted, the primary goals for Exchange 2013 are simplicity of scale, hardware utilization and failure isolation.
So Microsoft reduced the number of server roles to two: the Client Access Server role and the Mailbox Server role.
Q.What are the benefits of the architecture change to two server roles in Exchange 2013?
Ans: With two server roles, the Mailbox Server includes all the traditional components found in Exchange 2010 (the client access protocols, transport service, mailbox databases and unified messaging), and the Client Access Server role provides authentication, limited redirection and proxy services. The new architecture provides the following benefits:
The Client Access Server and Mailbox Server become "loosely coupled". All processing and activity for a specific mailbox occurs on the mailbox server that holds the active database copy, eliminating concerns about version compatibility between the CAS and the Mailbox Server
Version upgrade flexibility – Client Access servers can be upgraded independently and in any order
Session affinity to the Client Access Server role is not required – in Exchange 2013, client access and mailbox components are available on the same mailbox server; because the Client Access Server simply proxies all connections to a specific mailbox server, no session affinity is required
Only two namespaces are required for Exchange 2013
Q.What change is related to MAPI access using Outlook in Exchange 2013?
Ans: The two-server-role architecture changed the Outlook client connectivity behaviour: RPC is no longer a supported direct access protocol. This means that all Outlook connectivity must take place using RPC over HTTP, known as Outlook Anywhere. Because of this behaviour, there is no need for the RPC Client Access service on the CAS, which reduces the number of namespaces required for a site-resilient solution to two.
Q.Explain the change in Outlook client connection behaviour compared with Exchange 2010?
Ans: Outlook clients no longer connect to a server FQDN as they have done in all previous versions of Exchange.
Outlook uses Autodiscover to create a new connection point comprised of the mailbox GUID, the @ symbol, and the domain portion of the user's primary SMTP address. This change results in a near elimination of the unwelcome message "Your administrator has made a change to your mailbox. Please restart." Only Outlook 2007 and higher versions are supported with Exchange 2013.
Q.What is the Managed Store in Exchange 2013?
Ans: Managed Store is the name of the newly rewritten Information Store process, Microsoft.Exchange.Store.Service.exe and Microsoft.Exchange.Store.Worker.exe. It is integrated with the Microsoft Exchange Replication service to provide higher availability through improved resiliency. The Managed Store has also been architected to enable more granular management of resource consumption and faster root cause analysis through improved diagnostics. The Managed Store works with the Replication service to manage mailbox databases, and continues to use ESE as the database engine. Exchange 2013 includes a change to the mailbox database schema that provides many optimizations over previous versions of Exchange, and the Replication service is responsible for all availability related to Mailbox servers. This change provides faster database failover and better disk failure handling.
Q.What is a site mailbox?
Ans: A site mailbox is a new type of mailbox in Exchange 2013. It improves collaboration and user productivity by allowing access to both documents in a SharePoint site and email messages in Outlook 2013, using the same client interface.
Q.What happened to public folders in Exchange 2013?
Ans: A special type of mailbox called the public folder mailbox was introduced in Exchange Server 2013, which stores both the hierarchy and public folder content. This takes advantage of the existing high availability and storage technologies of the mailbox store. The legacy public folder database concept is gone in Exchange 2013, and public folder replication now uses the continuous replication model, like mailbox databases.
Q.How does mail flow occur in Exchange Server 2013?
Ans: Due to the architectural change, mail flow in Exchange 2013 occurs via the transport pipeline: a collection of transport services, connections, components and queues that work together to route messages to the categorizer in the Transport service on a Mailbox server inside the organization.
Messages from outside the organization enter the transport pipeline through a receive connector in the Front End Transport service on a Client Access server, are then routed to the Transport service on a Mailbox server, and the Mailbox Transport Delivery service delivers the email to the local mailbox database.
Messages from inside the organization enter the Transport service on a Mailbox server in the following ways: a receive connector, the pickup or replay directory, the Mailbox Transport service, or agent submission. Those emails can be relayed to the Front End Transport service on a Client Access server via the Transport service on the Mailbox server and sent outside.
Q.Explain the new transport pipeline in short?
Ans: The Front End Transport service on the Client Access server acts as a stateless proxy for all inbound and outbound external SMTP traffic for the Exchange 2013 organization. It doesn't inspect message content, communicates only with the Transport service on a Mailbox server, and doesn't queue any messages locally.
The Transport service on the Mailbox server is identical to the Hub Transport server role: it handles all SMTP mail flow for the organization, performs message categorization, and performs content inspection. It doesn't communicate directly with mailbox databases; that task is handled by the Mailbox Transport service. So the Transport service routes messages between the Mailbox Transport service, the Transport service and the Front End Transport service.
The Mailbox Transport service running on the Mailbox server consists of two separate services: the Mailbox Transport Submission service and the Mailbox Transport Delivery service.
The Mailbox Transport Delivery service receives email from the Transport service on the local or a different Mailbox server and connects to the local mailbox databases using Exchange RPC to deliver the messages. The Mailbox Transport Submission service connects to the local mailbox database using RPC to retrieve messages and submits them over SMTP to the Transport service on the local Mailbox server or on other Mailbox servers.
Q.What are the enhancements to batch mailbox moves in Exchange 2013?
Ans: Below are the enhancements to Exchange 2013 batch mailbox moves:
Multiple mailboxes move in large batches
Email notification during the move, with reporting
Automatic retry and automatic prioritization of moves
Primary and personal archive mailboxes can be moved together or separately
Option for manual move request finalization, which allows you to review a move before you complete it
Q.What new options are included in Exchange 2013 related to high availability and site resilience?
Ans:
Managed Availability – internal monitoring and recovery are integrated to prevent failures, proactively restore services, and initiate failovers automatically or alert the admin to take action
Managed Store – integrated with the Microsoft Exchange Replication service to provide higher availability
Multiple databases per disk – Exchange 2013 supports multiple databases, both active and passive, on the same disk
Automatic Reseed – if a disk fails, the database copy stored on that disk is copied from the active database copy to a spare disk on the same server
Automatic recovery from storage failures
Lagged copies can now heal themselves to a certain extent using automatic log play-down
The single copy alert task is removed and included in the Managed Availability component
DAG networks can be automatically configured by the system based on the configuration settings; a DAG can now distinguish between MAPI and replication networks and configure DAG networks automatically
Q.What features are discontinued in Exchange 2013 compared with Exchange 2010?
Ans: Below are a few features that are discontinued in Exchange 2013:
Hub Transport server role and Unified Messaging server role
Exchange Management Console and Exchange Control Panel
Outlook 2003 support; the RPC over TCP method of mailbox access is removed
S/MIME, search folders and spell check in OWA are removed
Linked connectors are removed
Anti-spam agents can be managed only from the Exchange Management Shell
The connection filtering agent is removed
Managed folders are removed
Tools like the Exchange Best Practices Analyzer, mail flow troubleshooter, performance monitor, performance troubleshooter and routing log viewer are removed
Q.What features are discontinued in Exchange 2013 compared with Exchange 2007?
Ans: Below are a few features discontinued in Exchange 2013 that are available in Exchange Server 2007:
Storage groups and public folder databases
Exchange WebDAV API and ESE streaming backup API
The high availability concepts CCR, LCR, SCR and SCC are not available
The Export-Mailbox / Import-Mailbox cmdlets and the Move-Mailbox cmdlet set
Managed folders
Q.What's new in Outlook Web App 2013?
Ans: Lots of new features are available in Outlook Web App 2013; below are a few:
Apps can be accessed from Outlook Web App
Contacts can be linked to see all their data in a single view
Ability to connect to a user's LinkedIn account and add the contacts automatically to OWA
Multiple calendars can be viewed in a merged view
Streamlined user interface for tablets and smartphones which supports the use of touch
Q.Name the features that are not available in OWA 2013?
Ans: Below are features that are available in previous versions but not in the Exchange Server 2013 Outlook Web App:
Shared mail folder access is not available
Distribution list moderation cannot be done from OWA
S/MIME support
Reading pane at the bottom of the window
Ability to reply to email messages sent as attachments
Search folders are not available
Q.What prerequisites are required to install Exchange Server 2013?
Ans: The below prerequisites are required to install Exchange Server 2013:
Operating system:
Windows Server 2008 R2 Service Pack 1 or later
Windows Server 2012
Additional prerequisites:
Microsoft .NET Framework 4.5 (pre-installed in Windows Server 2012)
Windows Management Framework 3.0
Microsoft Unified Communications Managed API 4.0, Core Runtime 64-bit
Microsoft Office 2010 Filter Pack 64-bit
Microsoft Office 2010 Filter Pack SP1 64-bit
AD DS and a few Windows features
Domain controller:
Forest functional level has to be Windows Server 2003 or higher
Schema master running on Windows Server 2003 SP2 or later
Q.On which operating systems is a Database Availability Group supported?
Ans: A DAG is supported on Windows Server 2012 Standard or Datacenter Editions, or Windows Server 2008 R2 SP1 Enterprise Edition. Windows Server 2008 R2 SP1 Standard Edition does not support a DAG.
Q.Under what conditions can Exchange 2013 coexist with previous versions of Exchange Server?
Ans:
Exchange 2003 and earlier versions: not supported
Exchange 2007: Exchange 2007 SP3 with Update Rollup 10 on all Exchange 2007 servers can coexist with Exchange 2013 CU2 and later
Exchange 2010: Exchange 2010 SP3 on all Exchange 2010 servers can coexist with Exchange 2013 CU2 or later
Q.What editions are available for Exchange Server 2013?
Ans: Exchange 2013 is available in two editions: Standard Edition and Enterprise Edition. Standard Edition allows only 5 databases to be mounted (including active and passive copies); Enterprise Edition allows 50 databases on the RTM version of Exchange and 100 databases on CU2 and later versions. A recovery database is not counted against this limit.
Q.What happens to the Exchange 2013 RTM version when the 120-day trial period expires?
Ans: Exchange 2013 functionality is not lost when the trial period expires, so you can maintain a lab without having to reinstall the trial version.
Q.What are the supported clients that can access an Exchange 2013 mailbox?
Ans: An Exchange 2013 mailbox can be accessed by the following clients:
Outlook 2013
Outlook 2010 SP1 with the Outlook 2010 November 2012 update
Outlook 2007 SP3 with the Outlook 2007 November 2010 update
Entourage 2008 for Mac, Web Services Edition
Outlook for Mac 2011
Q.What are the vision and goals of Exchange Server 2010 high availability?
Ans:
Vision – deliver a fast, easy-to-deploy-and-operate, economical solution that provides high availability for Exchange Server
Goals:
1. Deliver high availability and site resilience that is native to Exchange
2. Enable less expensive and less complex storage
3. Simplify administration and reduce support cost
4. Increase end-to-end availability
5. Support Exchange Server 2010 Online
Q.What high availability solutions were introduced in Exchange Server 2010?
Ans:
Unified technology for high availability and site resilience
New framework for creating highly available mailboxes
Evolution of continuous replication
Can be deployed on a range of storage options
Q.What high availability features were introduced in Exchange Server 2010?
Ans:
Mailbox resiliency – unified high availability and site resiliency
Database Availability Group – a group of up to 16 Mailbox servers that holds a set of replicated databases
Mailbox database copy – a mailbox database (.edb file and log files) that is either an active or a passive copy of the mailbox database
Database mobility – the ability of a single mailbox database to be replicated to and mounted on other Mailbox servers
RPC Client Access service – a Client Access Server feature that provides a MAPI endpoint for Outlook clients
Shadow redundancy – a transport feature that provides redundancy for messages for the entire time they are in transit
Incremental deployment – the ability to deploy high availability or site resilience after Exchange is installed
Exchange third-party replication API – an Exchange-provided API that enables the use of third-party replication for a DAG
Q.What is high availability?
Ans: High availability is a solution that provides data availability, service availability and automatic recovery from site failures.
Q.What is disaster recovery?
Ans: It is a procedure used to manually recover from a failure.
Q.What is site resilience?
Ans: Site resilience is a disaster recovery solution used for recovery from site failure.
Q.What are switchover and failover?
Ans: A switchover is a manual activation of one or more databases when a failure occurs. A failover is an automatic activation of one or more databases after a failure.
Q.What concepts are deprecated in Exchange Server 2010?
Ans:
Storage groups
Databases identified by the servers on which they live
Server names as part of database names
Clustered mailbox servers
Pre-installation of a failover cluster
Running setup in failover mode
Moving a CMS identity between servers
Shared storage
Two high availability copy limits
Private and public networks
Q.Explain the new features in Exchange Server 2010 high availability?
Ans:
No need to fail over a server if a single database fails
Failover and switchover occur at the database level, not the server level
With the new HA features, we can have 100 databases per server
Databases are no longer tied to a specific server and can float across servers in the organization
Q.Give an idea of the Exchange Server 2007 high availability architecture changes?
Ans: In Exchange Server 2007 HA there are four HA features: LCR, SCR, SCC and CCR. The concepts of LCR and SCC have been completely removed in Exchange Server 2010. The concepts of SCR and CCR are incorporated into the new HA feature (the Database Availability Group) in Exchange Server 2010.
Q.What's new in the Exchange Management Console?
Ans: The following new features are included in the Exchange Server 2010 management console:
Built on Remote PowerShell and RBAC
Multiple forest support
Cross-premises Exchange 2010 management – includes mailbox moves
Recipient bulk edit
PowerShell command logging
Q.What is the Exchange Control Panel?
Ans: The ECP is a new, simplified web-based management console: a browser-based management client for end users, administrators and specialists. The ECP can be accessed via URL, browsers and Outlook 2010. The ECP is deployed as part of the Client Access Server role, offers simplified user administration for management tasks, and is RBAC-aware.
Q.Who can use the ECP and what are the manageable options?
Ans:
Specialists and administrators – an administrator can delegate to specialists, e.g. help desk operators (change user names, passwords, etc.), department administrators (change OUs) and e-discovery administrators (legal department)
End users – comprehensive self-service tools for end users: fetch phone numbers, change names and create groups
Hosted customers – tenant administrators and tenant end users
Q.What is Role Based Access Control?
Ans: RBAC is the new authorization model in Exchange Server 2010, making it easy to delegate and customize permissions; it replaced the permission model used in Exchange Server 2007. Your role is defined by "what you do". RBAC includes self-administration and is used by the EMC, EMS and ECP.
Q.Who is affected by RBAC in Exchange Server 2010?
Ans: Administrators – role groups and universal security groups. End users – role assignment policies, where we can set read/write.
Q.How to delegate a role?
Ans:
Create the management role
Change the new management role's entries by removing old entries
Create a management scope if required
Assign the new management role
Q.What is Remote PowerShell in Exchange Server 2010?
Ans: In Exchange 2010, the management architecture is based on Remote PowerShell, included with Windows PowerShell 2.0. Remote PowerShell provides an RBAC-based permission model making it possible to grant much more granular permissions (Exchange 2007 used ACLs), uses standard protocols that make it easier to manage Exchange 2010 servers through firewalls, and explicitly separates the "client" and "server" portions of cmdlet processing.
Q.What are the supported OS platforms for installing the Exchange management tools?
Ans: In Exchange Server 2010 all functions are 64-bit only, so the admin tools require a 64-bit OS. The Exchange management tools can be installed on 64-bit OSes like Vista, Server 2008 and Windows 7; Remote PowerShell management can be used from both x86 and x64 OSes.
Q.What is federated sharing?
Ans: Federated sharing allows easy sharing of availability information, calendars and contacts with recipients in external federated organizations.
Q.What options are shared in federated sharing?
Ans:
Free/busy information
Calendar and contact sharing
Sharing policy
Q.Explain the federation commands in Exchange Server 2010?
Ans:
Establish the federation trust – New-FederationTrust: install a signing certificate on the CAS servers and exchange certificates with the federation gateway
Prove domain ownership – create a DNS TXT record, e.g. domainname.com IN TXT AppId=xxxxxxxx
Add the domain to the trust – Set-FederatedOrganizationIdentifier and Add-FederatedDomain; the domain must be an accepted domain
Q.How to establish federated sharing in Exchange Server 2010?
Ans:
Create the trust with a certificate exchange
Prove domain ownership
Add domains
Q.What is the Microsoft Federation Gateway?
Ans: Exchange Server 2010 uses the Microsoft Federation Gateway (MFG), an identity service that runs in the cloud, as the trust broker. Exchange organizations wanting to use federation establish a federation trust with the MFG, allowing it to become a federation partner to the Exchange organization. The trust allows users authenticated by Active Directory, known as the identity provider (IP), to be issued Security Assertion Markup Language (SAML) delegation tokens by the MFG. The delegation tokens allow users from one federated organization to be trusted by another federated organization. With the MFG acting as the trust broker, organizations are not required to establish multiple individual trust relationships with other organizations, and users can access external resources using a single sign-on (SSO) experience.
Q.What is a federation trust?
Ans: A federation trust is established between an Exchange organization and the MFG by exchanging the organization's certificate with the MFG and retrieving the MFG's certificate and federation metadata. The certificate is used for encrypting tokens.
Q.What is a sharing policy?
Ans: Sharing policies allow you to control how users in your organization can share calendar and contact information with users outside the organization.
Recipients must be provisioned to use a particular sharing policy.

Q.What are the prerequisites to create a Sharing Policy?
Ans: A federation trust has been created between your Exchange 2010 organization and Microsoft Federation Gateway, and the federated organization identifier is configured. Although you can create a sharing policy for any external domain, recipients from the specified domain can access your users' information only if they have a mailbox in an Exchange 2010 organization and their domain is federated.

Q.Why archive?
Ans: Growing e-mail volume – everyone wants more e-mail, so storage and backup disks must grow. Performance and storage issues – storage costs increase. Mailbox quotas – users are forced to manage their quota. PSTs – quota management often results in growing PSTs (Outlook AutoArchive). Discovery and compliance issues – PSTs are difficult to discover centrally, and regulatory retention schedules contribute to further volume/storage issues.

Q.How is archiving improved in Exchange Server 2010?
Ans: Archiving is improved by providing a larger mailbox architecture, simple migration of PSTs back to the server, discovery options, retention policies, and legal hold. The large mailbox architecture maintains performance and provides the option of DAS-SATA storage to reduce costs. Archiving enables simple migration of PSTs back to the server: if the archiving option is enabled for a user, a new archive mailbox is created for that user, and the user or the admin can set retention policies that move mails from the user's mailbox into the archive. Archiving also simplifies discovery, retention, and legal hold.

Q.What are the archiving options introduced in Exchange Server 2010?
Ans: Personal archive – a secondary mailbox that serves as the server-side counterpart of the primary mailbox's PST files. Retention policies – folder/item-level archive and delete policies. Multi-mailbox search – a role-based GUI; the admin can assign this permission to the legal team. Legal hold – monitors or prevents a user from deleting mail, searchable with multi-mailbox search. Journaling – journal report de-duplication (avoiding redundant journaling of distributed mails), with one copy of the journal report per database; and journal decryption – the Hub Transport role performs the decryption and sends the decrypted copy for journaling.

Q.What is the personal archive in Exchange Server 2010 archiving?
Ans: It is a secondary mailbox configured by the administrator. It appears alongside the user's primary mailbox in Outlook or OWA, and PST files can be dragged and dropped into the personal archive mailbox. Mails in the primary mailbox can be moved automatically using retention policies. The archive quota can be set separately from the primary mailbox quota.

Q.What are retention policies, and what can we do with them in Exchange Server 2010?
Ans: A retention policy is an option to move or delete certain mails by applying rules. We can set retention policies at the item or folder level. Policies can be applied directly within e-mail, with the expiration date stamped directly on the message, or to all e-mail within a folder. We can configure a delete policy to delete mail after a certain period, and archive policies to move certain mails to the archive mailbox after a certain period.

Q.What are the retention policies in Exchange Server 2010?
Ans: Move policy – automatically moves messages to the archive mailbox, with options of 6 months, 1 year, 2 years, 5 years, and never (2 years is the default). Move policies help keep the mailbox under quota; this works like Outlook AutoArchive but without creating PSTs. Delete policy – automatically deletes messages. Delete policies are global.
They remove unwanted items. Move + delete policy – automatically moves messages to the archive after X months and deletes them from the archive after Y months. We can set policy priority: explicit policies apply over default policies, and longer policies apply over shorter policies.

Q.What is Multi-Mailbox Search?
Ans: This option delegates search access to HR, compliance, or legal managers. The administrator has to grant permission to use this feature. It provides an option to search all mail items (e-mail, IM contacts, calendar) across primary mailboxes and archives. The filtering options in multi-mailbox search include sender, receiver, expiry policy, message size, send/receive date, cc/bcc, regular expressions, and IRM-protected items.

Q.What are the e-discovery features?
Ans: The following e-discovery features were introduced in Exchange Server 2010: search specific mailboxes or DLs; export search results to a mailbox or SMTP address; request an e-mail alert when the search completes; search results organized per the original hierarchy; with more added in the final release.

Q.What is Legal Hold and what are its features?
Ans: Legal hold is a new feature in Exchange Server 2010 to monitor or prevent a user from deleting a mail or mailbox. Its features are: copy edited and deleted items – this option existed in Exchange Server 2007 to hold auto-deleted items; set duration for auto-delete – indefinite or a specified time period; auto alert notification – sends alerts to users that they are on hold, eliminating a manual process; search dumpster – use multi-mailbox search to retrieve deleted/edited items indexed in the dumpster folder.

Q.What is journaling and what are the journaling features in Exchange Server 2010?
Ans: Journaling is an option to track mails from a particular user or from a group of users.
The new journaling features in Exchange Server 2010 are: transport journaling – the ability to journal individual mailboxes or SMTP addresses, with a detailed report per To/Cc/Bcc/Alt-Recipient and DL expansion; and journal report de-duplication – reduces duplication of journal reports, since Exchange Server 2010 creates one report per message.

Q.What is journal decryption?
Ans: Journal decryption is a new feature in Exchange Server 2010. If a user sends an encrypted message to a recipient and journaling is enabled for that user, the Hub Transport server decrypts the message and sends the decrypted copy for journaling. The intended recipient still receives the encrypted message.

Q.What is Set Quota in archive management?
Ans: With mailbox quota management we can assign a mailbox size for a user. This option can be enabled from the properties of the user account; the default mailbox quota is 10 GB.

Q.What is the universal inbox in OWA?
Ans: It provides one inbox for e-mail, text messages, and voice messages, and you can have multiple e-mail accounts in one OWA window.

Q.What is federation?
Ans: Federation is a new feature in Exchange Server 2010 for sharing company users' calendars with partners. A trust relationship must be established to use this feature.

Q.What is the continuous availability feature in Exchange Server 2010?
Ans: In Exchange Server 2007 we had server-to-server failover scenarios and needed failover clustering to configure the HA options, which was difficult to manage. In Exchange Server 2010, HA has moved to the database level, which provides quick recovery from disk and database failures. We can have up to 16 replicated mailbox database copies in a database availability group. This combines the capabilities of CCR and SCR into a single platform.

Q.Continuous availability at the user level?
Ans: If a mailbox move is happening, the user stays online and there is no interruption in sending or receiving mail.

Q.Explain the administration options in Exchange Server 2010?
Ans: Exchange Server 2010 simplifies administration: the compliance officer can easily search mailboxes; HR can easily update user information; the help desk can easily manage mailbox quotas; users can easily track the status of a message, create their own distribution groups, and modify their contact information.

Q.What are the security features introduced in Exchange Server 2007?
Ans: Edge Transport server – placed on the edge of the network, replacing the front-end server; its functions include virus and spam blocking, antivirus and anti-spam filtering, and routing messages into the organization. Hub Transport server – replaces the bridgehead server and acts as a policy compliance server. TLS – server-to-server Transport Layer Security for secure server-to-server message transport; it is an encryption technology. Encryption – by default, Exchange 2007 encrypts the traffic between Exchange Server 2007 and Outlook 2007 clients and provides full support for certificate-based PKI.

Q.Name the reliability and recoverability features in Exchange Server 2007?
Ans: Exchange 2007 can hold two copies of user information in the network with the help of the reliability and recoverability features introduced: Local Continuous Replication – a second copy of user information on another drive (same server); Cluster Continuous Replication – replicates the information across servers; Single Copy Cluster – configured on SAN, DAS, iSCSI, etc. (NAS is not supported); snapshot backup – supported by third-party vendors.

Q.What is the Exchange Management Shell?
Ans: It is a command-line utility introduced in Exchange Server 2007 that gives an administrator the ability to configure, administer, and manage an Exchange 2007 server environment using text commands instead of solely a graphical user interface (GUI).

Q.Name the Exchange Server 2007 roles?
Ans: Edge Transport server role – replaced the front-end server; functions as a firewall. Hub Transport server role – replaced the bridgehead server; handles message routing. Client Access server role – newly introduced; handles client connections. Mailbox server role – replaced the back-end server; holds the mailboxes. Unified Messaging server role – messaging solution for mobile devices, OVA, etc.

Q.Explain the Edge Transport role?
Ans: The Edge Transport server role transfers mail between the inside of your organization and the outside world. This role is installed on the edge of your network (the perimeter network); its main purpose is to protect your Exchange servers from all kinds of attacks. It must have ports 25 (SMTP) and 50636 (secure LDAP) open to the Hub Transport server on the internal LAN: port 25 is used to send mail in, and port 50636 is used to replicate the Exchange information it needs, such as changes to users' safe and blocked senders lists.

Q.Explain the Hub Transport role?
Ans: The main purpose of the Hub Transport server role is to transfer mail throughout your Exchange organization; this role is responsible for internal mail flow and replaces the bridgehead servers of Exchange Server 2003. It can also act as an Edge Transport server in smaller organizations. This must be the first role installed in Exchange 2007: you can install the Client Access server role and the Mailbox server role at the same time as the Hub Transport role, but not before.

Q.Explain the Client Access server role?
Ans: This is the role that handles client requests for OWA, Outlook Anywhere, ActiveSync, OVA, and offline address book distribution.
This role must be installed after the Hub Transport role and before the Mailbox server role. You can install the Mailbox server role at the same time as the Client Access role, but not before.

Q.Explain the Mailbox server role?
Ans: The Mailbox server holds the mailbox databases and public folder databases for your organization. It only stores the mailboxes; it does not transfer your mail. Transferring mail between your Mailbox servers is handled by Hub Transport servers. The Mailbox server role should be introduced only after the installation of the Hub Transport and Client Access server roles. If we are installing a Mailbox server with clustering options like CCR, SCC, or SCR, then no other server roles should be installed with this server role.

Q.Explain the Unified Messaging server role?
Ans: It functions as the interface point for the VoIP gateway or IP-PBX phone system. This role uses the user mailboxes as the single point of storage and access for voice mail and fax messages, in addition to their normal e-mail.

Q.What is Exchange ActiveSync?
Ans: ActiveSync provides synchronized access to e-mail from a handheld device, such as a Pocket PC or other Windows Mobile device. It allows real-time send and receive functionality to and from the handheld through the use of push technology.

Q.What is POP3?
Ans: The Post Office Protocol 3 (POP3) is a legacy protocol that is supported in Exchange 2007. POP3 enables simple retrieval of mail data by applications that use the POP3 protocol. Mail messages, however, cannot be sent with POP3 and must use the SMTP engine in Exchange. By default, POP3 is not turned on and must be explicitly activated. Contact us for more on Exchange Server Online Training.
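Several of the features covered above (federation trusts, retention policy tags, multi-mailbox search) are configured through the Exchange Management Shell. The following is a hedged sketch only, not a complete procedure: the domain names, tag names, and search names ("contoso.com", "HR-Case-01", etc.) are hypothetical placeholders, and `<certThumbprint>` must be replaced with the thumbprint of your own signing certificate.

```powershell
# Sketch: establish a federation trust with the Microsoft Federation Gateway
New-FederationTrust -Name "MFG Trust" -Thumbprint <certThumbprint>
Set-FederatedOrganizationIdentifier -DelegationFederationTrust "MFG Trust" `
    -AccountNamespace "contoso.com" -Enabled $true
# Prove domain ownership via the DNS TXT record first, then:
Add-FederatedDomain -DomainName "sales.contoso.com"

# Sketch: a retention policy tag that moves mail to the archive after 2 years
New-RetentionPolicyTag "Default 2yr Move to Archive" -Type All `
    -AgeLimitForRetention 730 -RetentionAction MoveToArchive

# Sketch: a delegated multi-mailbox (discovery) search, estimate only
New-MailboxSearch "HR-Case-01" -SourceMailboxes "DG-Finance" `
    -SearchQuery 'bonus' -EstimateOnly
```

These commands only run from an Exchange 2010 Management Shell session against a live organization, and the account running them needs the corresponding RBAC role assignments (e.g. membership in the Discovery Management role group for New-MailboxSearch).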
Hyperion Interview Questions
Q.How do you optimize an outline?
Ans: Usually the outline is optimized using the hourglass design for dimension ordering, i.e.: dimension with the Accounts tag; dimension with the Time tag; largest dense dimension; smallest dense dimension; smallest sparse dimension; largest sparse dimension.

Q.What are the ways to improve performance during data loads?
Ans: There are several ways to optimize a data load: grouping sparse member combinations; making the data source as small as possible; making the source fields as small as possible; positioning the data in the same order as the outline; loading from the Essbase Server; managing parallel data load processing.

Q.What are the design considerations for calculation optimization?
Ans: You can configure a database to optimize calculation performance. The best configuration for a site depends on the nature and size of the database: block size (8 KB to 100 KB) and block density; order of sparse dimensions; incremental data loading; database outlines with two or more flat dimensions; formulas and calculation scripts.

Q.When does fragmentation occur?
Ans: Fragmentation is likely to occur with the following: read/write databases that users are constantly updating with data; databases that execute calculations around the clock; databases that frequently update and recalculate dense members; data loads that are poorly designed; databases that contain a significant number of Dynamic Calc and Store members; databases that use an isolation level of uncommitted access with commit block set to zero.

Q.How can you measure fragmentation?
Ans: You can measure fragmentation using the average clustering ratio or the average fragmentation quotient.
Using the average fragmentation quotient: any quotient above the high end of the range indicates that reducing fragmentation may help performance – small databases (up to 200 MB): 60% or higher; medium (up to 2 GB): 40% or higher; large (greater than 2 GB): 30% or higher. Using the average clustering ratio: this database statistic indicates the fragmentation level of the data (.pag) files; the maximum value, 1, indicates no fragmentation.

Q.How can you prevent and remove fragmentation?
Ans: To prevent fragmentation, optimize data loads by sorting load records based upon sparse dimension members (grouping sparse members). To remove fragmentation, either export the database, delete all data in the database with CLEARDATA, and reload the export file, or force a dense restructure of the database.

Q.What is database restructuring?
Ans: As your business changes, you change the Essbase database outline to capture new product lines, provide information on new scenarios, reflect new time periods, etc. Some changes to a database outline affect the data storage arrangement, forcing Essbase to restructure the database.

Q.What are the types of database restructuring?
Ans: A database restructure is triggered in two ways: implicit restructures (dense restructure, sparse restructure, or outline-only restructure) and explicit restructures.

Q.What are the conditions affecting database restructuring?
Ans: Intelligent Calculation, name changes, and formula changes affect database restructuring. If you use Intelligent Calculation in the database, all restructured blocks are marked as dirty whenever data blocks are restructured; marking the blocks as dirty forces the next default Intelligent Calculation to be a full calculation. If you change a name or a formula, Essbase does not mark the affected blocks as dirty.
Therefore, you must use a method other than a full calculation to recalculate the member or the database.

Q.What files are used during restructuring?
Ans: When Essbase restructures both the data blocks and the index, it uses the following files: essxxxxx.pag – Essbase data file; essxxxxx.ind – Essbase index file; dbname.esm – Essbase kernel file containing control information used for database recovery; dbname.tct – transaction control table; dbname.ind – free fragment file for data and index free fragments; dbname.otl – outline file defining all metadata for a database and how data is stored.

Q.What actions improve performance for restructuring?
Ans: There are a number of things you can do to improve performance related to database restructuring: if you change a dimension frequently, make it sparse; use incremental restructuring to control when Essbase performs a required database restructure; select options when you save a modified outline that reduce the amount of restructuring required.

Q.Which restructure operations are faster?
Ans: These types of restructure operations are listed from fastest to slowest: outline only (no index or data files); sparse (only index files); dense (index and data files) as a result of adding, deleting, or moving members and other operations; dense (index and data files) as a result of changing a dense dimension to sparse or a sparse dimension to dense.

Q.What are implicit restructures?
Ans: Essbase initiates an implicit restructure of the database files after an outline is changed using Outline Editor or Dimension Build. The type of restructure that is performed depends on the type of changes made to the outline.

Q.What are explicit restructures?
Ans: When you manually initiate a database restructure, you perform an explicit restructure. An explicit restructure forces a full restructure of the database; a full restructure comprises a dense restructure plus the removal of empty blocks.

Q.What is a dense restructure?
Ans: If a member of a dense dimension is moved, deleted, or added, Essbase restructures the blocks in the data files and creates new data files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are not removed. Essbase marks all restructured blocks as dirty, so after a dense restructure you need to recalculate the database.

Q.What is a sparse restructure?
Ans: If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index and creates new index files. Restructuring the index is relatively fast; the amount of time required depends on the size of the index.

Q.What is an outline-only restructure?
Ans: If a change affects only the database outline, Essbase does not restructure the index or data files. Member name changes, creation of aliases, and dynamic calculation formula changes are examples of changes that affect only the database outline.

Q.Explain the process of a dense restructure?
Ans: To perform a dense restructure, Essbase does the following: 1. Creates temporary files that are copies of the .ind, .pag, .otl, .esm, and .tct files. Each temporary file substitutes either N or U for the last character of the file extension, so the temporary file names are essxxxxx.inn, essxxxxx.pan, dbname.otn, dbname.esn, and dbname.tcu. 2. Reads the blocks from the database files copied in step 1, restructures the blocks in memory, and then stores them in the new temporary files; this step takes the most time. 3. Removes the database files copied in step 1, including the .ind, .pag, .otl, .esm, and .tct files. 4. Renames the temporary files to the correct file names: .ind, .pag, .otl, .esm, and .tct.

Q.Explain the process of a sparse restructure?
Ans: When Essbase does a sparse restructure (restructures just the index), it uses the following files: essxxxxx.ind, dbname.otl, and dbname.esm.

Q.What is data compression?
Ans: Essbase allows you to choose whether data blocks that are stored on disk are compressed, as well as which compression scheme to use. When data compression is enabled, Essbase compresses data blocks when it writes them out to disk and fully expands the compressed data blocks, including empty cells, when the blocks are swapped into the data cache. Generally, data compression optimizes storage use. You can check compression efficiency by checking the compression ratio statistic.

Q.What are the types of data compression?
Ans: Essbase provides several options for data compression: Bitmap compression (the default) – Essbase stores only non-missing values and uses a bitmapping scheme. The bitmap uses one bit for each cell in the data block, whether the cell value is missing or non-missing. When a data block is not compressed, Essbase uses 8 bytes to store every non-missing cell. In most cases bitmap compression conserves disk space more efficiently, though much depends on the configuration of the data. Run-length encoding (RLE) – Essbase compresses repetitive, consecutive values: any value that repeats three or more times consecutively, including zeros and #MISSING values. Each data value that is repeated three or more times uses 8 bytes plus a 16-byte repetition factor. zlib compression – Essbase builds a data dictionary based on the actual data being compressed; this method is used in packages like PNG, Zip, and gzip. Generally, the more dense or heterogeneous the data is, the better zlib compresses it in comparison to bitmap or RLE compression. Index Value Pair compression – Essbase applies this compression if the block density is less than 3%; it addresses compression on databases with larger block sizes where the blocks are highly sparse (zlib does not use this). No compression – Essbase does not compress data blocks when they are written to disk.

Q.When do you use RLE over bitmap compression?
Ans: Use RLE over bitmap when the average block density is very low (< 3%) or the database has many consecutive repeating values.

Q.When do you disable compression?
Ans: You may want to disable data compression if blocks have very high density (90% or greater) and have few consecutive, repeating data values. Under these conditions, enabling compression consumes resources unnecessarily. Don't use compression if disk space and memory are not an issue for your application, as it can become a drain on the processor.

Q.What are data locks?
Ans: Essbase issues write (exclusive) locks for blocks that are created, updated, or deleted, and issues read (shared) locks for blocks that should be accessed but not modified. By issuing the appropriate locks, Essbase ensures that data changed by one operation cannot be corrupted by a concurrent update.

Q.What is a transaction?
Ans: When a database is in read/write mode, Essbase considers every update request to the server (such as a data load, a calculation, or a statement in a calculation script) to be a transaction.

Q.What is the transaction control file?
Ans: Essbase tracks information about transactions in a transaction control file (dbname.tct). The transaction control file contains an entry for each transaction and tracks the current state of each transaction (Active, Committed, or Aborted).

Q.What is an isolation level and what are the types of isolation levels?
Ans: Isolation levels determine how Essbase commits data to disk. Essbase offers two isolation levels for transactions: committed access and uncommitted access (the default).

Q.What is committed access?
Ans: When data is committed, it is taken from server memory and written to the database on disk. Essbase automatically commits data to disk; there are no explicit commands that users perform to commit data blocks.

Q.Talk about committed and uncommitted access?
Ans: Committed: data is committed at the end of a transaction and retained until then; all blocks in question are locked.
Pre-image access: if enabled, read-only access is allowed. Wait times: indefinite; immediate access (no wait); or a specified number of seconds. Uncommitted: data is committed only at synchronization points, with block-by-block locks. Commit row: the number of rows of data loaded when a sync point occurs. Commit block: the number of blocks modified when a sync point occurs. For rollback, set commit row = 0 and commit block = 0.

Q.What are the advantages and disadvantages of using committed access?
Ans: You can optimize data integrity by using committed access, but setting the isolation level to committed access may increase memory and time requirements for a database restructure.

Q.Which transaction is always in committed mode?
Ans: The Spreadsheet Add-in Lock and Send and the Grid API always use committed access mode.

Q.What are the memory caches used by Essbase to coordinate memory usage?
Ans: Essbase uses five memory caches to coordinate memory usage: 1. index cache, 2. data file cache, 3. data cache, 4. calculator cache, 5. dynamic calculator cache.

Q.What is the index cache?
Ans: The index cache is a buffer in memory that holds index pages. How many index pages are in memory at one time depends upon the amount of memory allocated to the cache.

Q.What is the data file cache?
Ans: The data file cache is a buffer in memory that holds compressed data files (.pag files). Essbase allocates memory to the data file cache during data load, calculation, and retrieval operations, as needed. The data file cache is used only when direct I/O is in effect.

Q.What is the data cache?
Ans: The data cache is a buffer in memory that holds uncompressed data blocks. Essbase allocates memory to the data cache during data load, calculation, and retrieval operations, as needed.

Q.What is the calculator cache?
Ans: The calculator cache is a buffer in memory that Essbase uses to create and track data blocks during calculation operations.

Q.What is the dynamic calculator cache?
Ans: The dynamic calculator cache is a buffer in memory that Essbase uses to store all of the blocks needed for a calculation of a Dynamic Calc member in a dense dimension (for example, for a query).

Q.What are the sizes of the memory caches used by Essbase?
Ans: Index cache: minimum 1024 KB (1,048,576 bytes); default 1024 KB for buffered I/O and 10240 KB (10,485,760 bytes) for direct I/O; optimal – the combined size of all essn.ind files if possible, otherwise as large as possible. Do not set this cache size higher than the total index size, as no performance improvement results. Data file cache: minimum 10240 KB for direct I/O; default 32768 KB (33,554,432 bytes) for direct I/O; optimal – the combined size of all essn.pag files if possible, otherwise as large as possible. This cache setting is not used if Essbase is set to use buffered I/O. Data cache: minimum 3072 KB (3,145,728 bytes); default 3072 KB; optimal – 0.125 * the value of the data file cache size. Calculator cache: minimum 4 bytes; maximum 200,000,000 bytes; default 200,000 bytes. The best size for the calculator cache depends on the number and density of the sparse dimensions in your outline and on the amount of memory the system has available.

Q.What is the structure of currency applications?
Ans: In a business application requiring currency conversion, the main database is divided into at least two slices: one slice handles input of the local data, and another slice holds a copy of the input data converted to a common currency. Essbase holds the exchange rates required for currency conversion in a separate currency database. The currency database outline, which is automatically generated by Essbase from the main database after you assign the necessary tags, typically maps a given conversion ratio onto a section of the main database.
After the currency database is generated, it can be edited just like any other Essbase database.

Q.What are the three dimensions that should be present in the main database of a currency application?
Ans: The main database outline can contain from 3 to n dimensions. At a minimum, the main database must contain the following dimensions: a dimension tagged as time, a dimension tagged as accounts, and a market-related dimension tagged as country.

Q.What dimensions should be present in the currency database of a currency application?
Ans: A currency database always consists of the following three dimensions, with an optional fourth: a dimension tagged as time, which is typically the same as the dimension tagged as time in the main database; a dimension tagged as country, which contains the names of currencies relevant to the markets (or countries) defined in the main database; a dimension tagged as accounts, which enables the application of various rates to members of the dimension tagged as accounts in the main database; and, optionally, a currency type dimension, which enables different scenarios for currency conversion.

Q.What conversion methods does Essbase support for currency applications?
Ans: Different currency applications have different conversion requirements. Essbase supports two conversion methods: overwriting local values with converted values, or keeping both local and converted values. Either method may require a currency conversion to be applied at report time; report-time conversion enables analysis of various exchange-rate scenarios without actually storing data in the database.

Q.What is the process to build a currency conversion application and perform conversions?
Ans: To build a currency conversion application and perform conversions, use the following process: 1. Create or open the main database outline. 2. Prepare the main database outline for currency conversion. 3. Generate the currency database outline. 4. Link the main and currency databases. 5. Convert currency values. 6. Track currency conversions. 7. If necessary, troubleshoot the currency conversion.

Q.What is CCONV?
Ans: After you create a currency conversion application, you convert data values from a local currency to a common, converted currency by using the CCONV command in calculation scripts, e.g.: CCONV USD; CALC ALL;

Q.Can we convert the converted currency back into its local currency?
Ans: Yes, you can convert the data values back to the original local currencies by using the CCONV TOLOCALRATE command.

Q.When you convert currencies using the CCONV command, are the resulting data blocks marked as dirty or clean?
Ans: When you convert currencies using the CCONV command, the resulting data blocks are marked as dirty for the purposes of Intelligent Calculation. Thus, Essbase recalculates all converted blocks when you recalculate the database.

Q.What is CCTRACK?
Ans: You can use the CCTRACK setting in the essbase.cfg file to control whether Essbase tracks the currency partitions that have been converted and the exchange rates that have been used for the conversions. By default, CCTRACK is turned on.

Q.What are the reasons to turn off CCTRACK?
Ans: For increased efficiency when converting currency data between currency partitions, you may want to turn off CCTRACK. For example, you load data for the current month into the local partition, use the DATACOPY command to copy the entire currency partition that contains the updated data, and then run the conversion on the currency partition.

Q.How can you turn off CCTRACK?
Ans: You can turn off CCTRACK in three ways: use the SET CCTRACKCALC ON|OFF command in a calculation script to turn off CCTRACK temporarily; use the CLEARCCTRACK calculation command to clear the internal exchange rate tables created by CCTRACK; or set CCTRACK to FALSE in the essbase.cfg file.

Q.What is an LRO (linked reporting object)?
Ans: An LRO is an artifact associated with a specific data cell in an Essbase database. LROs can enhance data analysis capabilities by providing additional information on a cell. An LRO can be any of the following:
· A paragraph of descriptive text (a "cell note")
· A separate file that contains text, audio, video, or graphics
· A URL for a Web site
· A link to data in another Essbase database

Q. How do you create LRO's?
Ans: Users create linked objects through Essbase Spreadsheet Add-in for Excel by selecting a data cell and choosing a menu item. There is no limit to the number of objects you can link to a cell. The objects are stored on the Essbase Server, where they are available to any user with the appropriate access permissions. Users retrieve and edit the objects through the Essbase Spreadsheet Add-in for Excel Linked Objects Browser feature, which enables them to view objects linked to the selected cell.

Q. Does adding or removing links to a cell affect the cell contents?
Ans: No. LROs are linked to data cells, not to the data contained in the cells. The link is based on a specific member combination in the database.

Q. Give a few examples of LRO's?
Ans:
Ex1: A sales manager may attach cell notes to recently updated budget items.
Ex2: A finance manager might link a spreadsheet containing supporting data for this quarter's results.
Ex3: A product manager might link bitmap images of new products.
Ex4: A sales manager may link the URL of a company's Web site to quickly access the information on the Web.

Q. How does Essbase locate and retrieve linked objects?
Ans: Essbase uses the database index to locate and retrieve linked objects. If you clear all data values from a database, the index is deleted and so are the links to linked objects. If you restructure a database, the index is preserved and so are the links to linked objects.

Q. Do shared members share LRO's?
Ans: No. Shared members share data values but do not share LROs. This is because LROs are linked to specific member combinations, and shared members do not have identical member combinations. To link a given object to shared members, link it to each shared member individually.

Q. Can you change the member combination associated with any linked object?
Ans: You cannot change the member combination associated with any linked object. To move an object to another member combination, first delete it, then use Essbase Spreadsheet Add-in for Excel to re-link the object to the desired member combination.

Q. Why do we need to limit the LRO file sizes for storage conversion?
Ans: Because Essbase stores linked files in a repository on the server and, by default, the size is unlimited. Limiting the file size prevents users from taking up too much of the server resources by storing extremely large objects. You can set the maximum linked file size for each application. If a user attempts to link a file that is larger than the limit, an error message displays. The maximum file size setting applies only to linked files and does not affect cell notes or URLs. The lengths of the cell note, URL string, and LRO descriptions are fixed.

Q. What is partitioning?
Ans: A partition is the piece of a database that is shared with another database. An Essbase partitioned application can span multiple servers, processors, or computers.

Q. What is Essbase Partitioning?
Ans: Essbase Partitioning is a collection of features that makes it easy to design and administer databases that span Essbase applications or servers. Partitioning is licensed separately from Essbase.

Q. What are the types of partitions available in Essbase?
Ans: There are three types of partitions:
· Transparent partition: A form of shared partition that provides the ability to access and manipulate remote data transparently as though it is part of your local database. The remote data is retrieved from the data source each time you request it.
Any updates made to the data are written back to the data source and become immediately accessible to both local data target users and transparent data source users.
· Replicated partition: A portion of a database, defined through Partition Manager, used to propagate an update to data mastered at one site to a copy of data stored at another site. Users can access the data as though it were part of their local database.
· Linked partition: A shared partition that enables you to use a data cell to link two databases. When a user clicks a linked cell in a worksheet, Essbase opens a new sheet displaying the dimensions in the linked database. The user can then drill down those dimensions.

Q. What is the process for designing a partitioned database?
Ans: Here is the suggested process for designing a partitioned database:
1. Learn about partitions.
2. Determine whether the database can benefit from partitioning.
3. Identify the data to partition.
4. Decide on the type of partition.
5. Understand the security issues related to partitions.

Q. What are the parts of a partition?
Ans: Partitions contain the following parts:
· Type of partition: A flag indicating whether the partition is replicated, transparent, or linked.
· Data source information: The server, application, and database name of the data source.
· Data target information: The server, application, and database name of the data target.
· Login and password: The login and password information for the data source and the data target.
· Shared areas: A definition of one or more areas, or subcubes, shared between the data source and the data target.
· Member mapping information: A description of how the members in the data source map to members in the data target.
· State of the partition: Information about whether the partition is up to date and when the partition was last updated.

Q. What are the benefits of partitioning?
Ans: Partitioning applications can provide the following benefits:
· Improved scalability, reliability, availability, and performance of databases
· Reduced database sizes
· More efficient use of resources
· Data synchronization across multiple databases
· Outline synchronization across multiple databases
· The ability for users to navigate between databases with differing dimensionality

Q. Can you define different types of partitions between the same two databases?
Ans: No.

Q. Can a single database serve as the data source or data target for multiple partitions?
Ans: Yes.

Q. What is an overlapping partition?
Ans: An overlapping partition occurs when similar data from two or more databases serve as the data source for a single data target in a partition.

Q. Is an overlapping partition valid in all the partition types?
Ans: An overlapping partition is allowed in linked partitions, but it is invalid in replicated and transparent partitions and generates an error message during validation.

Q. When do you use substitution variables in partitions?
Ans: Using substitution variables in partition definitions enables you to base the partition definition on different members at different times.

Q. Can we use attribute values to partition a database?
Ans: Yes. You can use attribute functions for partitioning on attribute values, but you cannot partition an attribute dimension.

Q. Can we partition an attribute dimension?
Ans: No, we cannot partition an attribute dimension.

Q. What is the limitation on version and mode during partition?
Ans: Both ends of a transparent, replicated, or linked partition must be on the same release level of Essbase Server. For example, if the source of a linked partition is on a Release 7.1.2 server, the target must also be on a Release 7.1.2 server. In addition, for transparent and replicated (but not linked) partitions, the application mode of both ends of the partitions must be the same: either Unicode mode or non-Unicode mode.

Q. What are the major differences between ASO and BSO?
Ans:
· If we have many dimensions (generally more than 10) that simply roll up, we go for ASO; if we have fewer dimensions, we go for BSO.
· We cannot write back in ASO; we can write back in BSO.
· Most of the dimensions are sparse in ASO; most of the dimensions are dense in BSO.

Q. What is "Enterprise Analytics"?
Ans: ASO in System 9 is called Enterprise Analytics.

Q. Explain in detail the features of ASO?
Ans: ASO databases are created specifically to deal with the requirements of very large sparse data sets with a high number of dimensions and potentially millions of members.
· ASO databases do not have indexes or data blocks.
· ASO does not use calculation scripts, because calculations are not complex.
· ASO uses a new kind of storage mechanism that allows calculation times 10 to 100 times faster than BSO.
· ASO can store up to 2^52 dimensional combinations.
· The front-end tools usually do not care whether the database is ASO or BSO. Even MaxL sees only minor differences.
· We can have attribute dimensions in ASO.
· In ASO there is no concept of dense and sparse dimensions.
· We do not have two-pass logic or built-in time balance functionality (time balance functionality is present from version 9.3 onwards).
· Member formulas are not supported in stored hierarchies.
· Only the non-consolidation (~) and addition (+) operators are supported in shared hierarchies.
· We cannot create more than one database per application in ASO.
· ASO does not utilize procedural calculation scripts.
· ASO formulas are written in MDX syntax.
· ASO has an Accounts dimension, but it is completely different from the Accounts dimension of BSO.
· ASO is read-only. You cannot write to ASO databases, but there is a workaround using transparent partitions pointing to an attached BSO database for those duties.
· You can load data to level-zero members only.
· The database must restructure after any members in the standard dimensions are added, deleted, or moved.
In fact, most actions on an ASO outline will cause either a loss of data or a restructure.

Q. How do you differentiate ASO applications?
Ans: You can easily identify an ASO database in the Administration Services Console by the red star beside the application name.

Q. How do you create an ASO application?
Ans: ASO has two types of hierarchies: stored and dynamic. A dimension can contain both types of hierarchies (if you enable multiple hierarchies). Other properties that need to be set for dimensions and members include:
· Dimension type
· Data storage (store, never share, label only)
· Member solve order
· Alias
You can add dimensions using the visual editor or rules files. Unlike block storage, ASO does not allow you to preview outline changes, so if you are unsure of the build file, make a backup of your outline before running the new build rule. For ASO databases, after the data values are loaded into the level 0 cells of an outline, the database requires no separate calculation step. Retrieval and analysis from an ASO database work just as with a BSO database.

Q. How do you create an ASO database using the ASO Outline Conversion Wizard?
Ans: You can also create an ASO database using the ASO Outline Conversion Wizard, which converts an existing BSO database to an ASO database. This is advantageous because we do not need to create the ASO database from scratch; however, we do need to re-engineer the dimensions and hierarchies.

Q. How do you create an ASO application in the automated way?
Ans: The final way of creating an ASO application is by using the "create application", "create database", and "create outline" commands in MaxL. Typically this method is used when you are running the MaxL commands as part of a batch job.
** Unicode is supported for BSO databases only.
** Data Mining is not supported by ASO databases.
** MDX is the only mechanism for defining member calculations in ASO databases.
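The MaxL route described above can be sketched as a short batch script. This is only an illustrative sketch: the application and database names (AsoApp, AsoDb), the host, and the credentials are invented placeholders, and the exact keyword for designating aggregate storage should be verified against the MaxL statement reference for your Essbase release.

```
/* Hypothetical MaxL script (run via the essmsh shell, e.g. as part of a
   nightly batch job). All names and credentials are placeholders. */
login admin identified by password on localhost;

/* Mark the new application as aggregate storage (ASO) rather than the
   default block storage (BSO). */
create application AsoApp type aggregate_storage;
create database AsoApp.AsoDb;

logout;
exit;
```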
Unicode applications use the UTF-8 encoding form to interpret and store character text, providing support for multiple character sets. To set up a Unicode application:
1. Set up a computer for Unicode support by doing one of the following: install an operating system that supports UTF-8 encoding, or install a Unicode editor.
2. Set the Essbase Server to Unicode mode via Administration Services or MaxL.
3. Check the Unicode box when creating a new Unicode-mode application.
You can also migrate from non-Unicode applications to Unicode applications (but not the other way round).

Report Scripts are outdated but can still be helpful when extracting subsets of data from Essbase for online backups or for feeding into other systems.

The Wizards tab of the Administration Services Console menu has the following components:
· Migration
· Aggregate Storage Outline Conversion
· Aggregate Storage Partition
· User Setup
· Data Mining Wizard

Hyperion Interview Questions and Answers

Q. What are the different types of log files?
Ans: There are many log files in Essbase, but the important ones are:
· Application log
· Essbase.log
· configtool.log
· eas_install.log
· essbaseserver-install.log

Q. Suppose we have assigned Generation 2 and Generation 4 as of now, and think of adding Generation 3 later. Can we build the dimension?
Ans: No. If generation 2 and generation 4 exist, we must assign generation 3.

Q. What are attributes?
Ans: An attribute is a classification of a member in a dimension. You can select and group members based on their associated attributes. You can also specify an attribute when you perform calculations and use calculation functions. For example, the product dimension in the Sample Basic database has attributes such as size, package type, and flavor. We can add these attributes to the dimensions and then retrieve data such as "coke with 8 Oz with bottles", which is useful for generating reports.

Q. Why do objects get locked, and when does this happen?
Ans: Objects get locked to prevent users from making simultaneous and conflicting changes to Essbase database objects. By default, whenever an object is accessed through the Administrative Services Console or the Excel spreadsheet add-in, it gets locked.

Q. What is the difference between UDA's and attribute dimensions?
Ans: Attribute dimensions provide more flexibility than UDA's. Attribute calculation dimensions, which include five members with the default names sum, count, min, max, and avg, are automatically created for attribute dimensions and are calculated dynamically.

Q. How do attribute dimensions and UDA's impact batch calculation performance?
Ans: UDA's: no impact, as they do not perform any inherent calculations. Attribute dimensions: no impact, as they perform only dynamic calculations.

Q. What are the different types of attributes?
Ans: Essbase supports two different types of attributes:
· User-defined attributes: attributes that are defined by the user.
· Simple attributes: attribute types that Essbase supports natively: Boolean, date, number, and string.

Q. What are filters?
Ans: A filter is a method of controlling access to database cells in Essbase. A filter is the most detailed level of security, allowing you to define the varying access levels that different users can have to individual database values.

Q. What are TB First and TB Last?
Ans: TB First: in the Sample.Basic database, the accounts member Opening Inventory is tagged as TB First. Opening Inventory consolidates the value of the first month in each quarter and uses that value for that month's parent. For example, the value for Qtr1 is the same as the value for Jan.
TB Last: in the Sample.Basic database, the accounts member Ending Inventory is tagged as TB Last. Ending Inventory consolidates the value for the last month in each quarter and uses that value for that month's parent. For example, the value for Qtr1 is the same as the value for Mar.

Q. How can we display UDA's in reports?
How do they impact report performance?
Ans: UDA values are never displayed in reports and hence do not impact report performance.

Q. How do attribute dimensions impact report performance?
Ans: They can heavily impact report performance, because attributes are calculated dynamically when referenced in the report. If a very large number of attribute dimensions is displayed in a report, performance can drop drastically.

Q. While loading the data, you have applied both the selection criteria and the rejection criteria to the same record. What will be the outcome?
Ans: The record will be rejected.

Q. How is data stored in the Essbase database?
Ans: Essbase is a file-based database where the data is stored in .PAG files of up to 2 GB each, which grow sequentially.

Reports Questions

Q. Can we have multiple metaoutlines based on one OLAP model in Integration Services?
Ans: Yes.

Q. What are LRO's (Linked Reporting Objects)?
Ans: They are objects such as files, cell notes, or URL's associated with specific data cells of an Essbase database. You can link multiple objects to a single data cell. These linked objects are stored on the server. LRO's can be exported or imported with the database for backup and migration activities.

Q. What are the three primary build methods for building dimensions?
Ans: 1. Generation references 2. Level references 3. Parent-child references.

Q. How do UDA's impact database size?
Ans: There is no impact on the database, as UDA's do not store data in the database.

Q. Can we have a metaoutline based on two different OLAP models?
Ans: No.

Q. Can we create UDA's and apply them to dense as well as sparse dimensions?
Ans: Yes.

Q. What are the types of partitions available in Essbase?
Ans: There are three types of partitions:
1. Transparent partition: A form of shared partition that provides the ability to access and manipulate remote data transparently as though it is part of your local database.
The remote data is retrieved from the data source each time you request it. Any updates made to the data are written back to the data source and become immediately accessible to both local data target users and transparent data source users.
2. Replicated partition.
3. Linked partition.

Q. What is hybrid analysis?
Ans: Lower-level members and their associated data remain in a relational database, whereas upper-level members and their associated data reside in the Essbase database.

Q. Why is a top-down calculation less efficient than a bottom-up calculation? Being less efficient, why do we use it?
Ans: In the process, it calculates more blocks than is necessary. Sometimes it is necessary to perform a top-down calculation to get correct calculation results.

Q. On what basis will you decide to invoke a serial or a parallel calculation method?
Ans: If we have a single processor, we use serial calculation; if we have multiple processors, we can break the task into threads and make them run on different processors.

Q. What are the specified roles, other than Administrator, needed to view sessions, disconnect sessions, or kill user requests for a particular application?
Ans: You should have the role of Application Manager for the specified application.

Q. What is the block locking system?
Ans: Analytic Services (Essbase Services) locks the block, and all other blocks which contain the children of that block, while calculating that block; this is the block locking system.

Q. What are the three options specified in Username and Password management under the security tab in Essbase Server properties?
Ans:
1. Login attempts allowed before the username is disabled.
2. Number of inactive days before the username is disabled.
3. Number of days before the user must change the password.

Q. Can we have multiple databases in one single application?
Ans: Yes, but only one database per application is recommended. It depends on the kind of database you are going to create: with ASO you cannot create more than one database per application, while with BSO you can.

Q. We have created an application in Unicode mode. Can we change it later to non-Unicode mode?
Ans: No.

Q. Dynamic calc decreases retrieval time and increases batch database calculation time. How true is the statement?
Ans: The statement should be just the opposite: because dynamic calc members are calculated when requested, retrieval time increases (while batch calculation time decreases).

Q. What is the role of Provider Services?
Ans: To communicate between Essbase and Microsoft Office tools.

Q. A customer wants to run two instances of an Essbase server on the same machine to have both a test environment and a development environment on the same server. Can he do that?
Ans: Yes. We can have multiple instances of an Essbase server on a single machine, and there will be different sets of Windows services for these instances.

Hyperion Financial Management Interview Questions and Answers

Q. What is Hyperion Financial Management?
Ans: Oracle Hyperion Financial Management is a comprehensive, Web-based application that delivers global financial consolidation, reporting, and analysis in a single, highly scalable software solution. It utilizes today's most advanced technology, yet is built to be owned and maintained by the enterprise's finance team.

Q. What are the benefits of Hyperion Financial Management?
Ans:
· Accelerate reporting cycles: reduce closing cycles by days and deliver more timely results to internal and external stakeholders.
· Improve transparency and compliance: helps reduce the cost of compliance (as stipulated by the Sarbanes-Oxley Act, electronic filing, and other regulatory requirements) and supports disclosure requirements such as sustainability reporting.
· Perform strategic analysis: spend less time on processing and more time on value-added analysis.
· Deliver a single truth: provide a single version of the truth to support financial management and statutory reporting.
· Easily integrate: integrate not only with Hyperion products but also with your existing infrastructure.

Q. Define Hyperion?
Ans: Hyperion is a Business Intelligence (BI) and Business Performance Management (BPM) tool. It is the market leader in operational, financial, and strategic planning. It contains applications for reporting, planning, dashboards, analysis, scorecarding, consolidation, workspace, master data management, and foundation services.

Q. Explain OLAP and mention whether it is related to Hyperion Financial Management?
Ans: Hyperion HFM and Hyperion Planning are both Essbase-based: they are the front-end technology, and Essbase is the back end. Essbase is a MOLAP engine. There are three types of OLAP technology in the market: ROLAP, MOLAP, and HOLAP. An example of ROLAP is BO; here we deal with tables, and they act as a virtual cube. Oracle Express, Hyperion Essbase, and Cognos, on the other hand, are real cubes, thus MOLAP. ROLAP + MOLAP = HOLAP.
In the case of BO, we need joins to attain the OLAP effect virtually, with joined tables behind; in the case of MOLAP there is no concept of a table, only a cube. For example, if a page is a table, then a book is a cube. If the data is very large, it is advisable to take up ROLAP rather than MOLAP, because the performance of the cubes degrades. If the size of the data is around 100 GB to 150 GB, then it is fine to go with MOLAP; if it is more than that, we should choose BO.

Q. Explain why we use Hyperion?
Ans: We have IIS for HFM, although there is no IIS for OLAP. Without IIS, retrieving financial data is a time-taking process.

Q. Is it possible to have one ASO database and one BSO database in a single application? Justify.
Ans: No, because the ASO/BSO classification is defined at the application level, not at the database level.

Q. Can we have multiple databases in a single application?
Ans: Yes, but only one database per application is recommended.

Q. Can we start and stop an application individually, and how can this be used to increase performance?
Ans: We can manage server resources by starting only the applications that receive heavy user traffic. When an application is started, all its associated databases are loaded into memory.

Q. Explain custom-defined macros?
Ans: Custom-defined macros are written with Essbase calculator functions and special macro functions. They use an internal Essbase macro language that enables you to combine calculation functions and to operate on multiple input parameters.

Q. Explain the data file cache?
Ans: It represents the buffer in memory that holds compressed data files (.PAG).

Q. What does 'DOU' mean?
Ans: In RPG, DOU means "Do Until". It executes a loop terminated by an End or EndDo. For instance:
Eval X=1
DOU X=4
Eval X=X+1
EndDo
The loop body here executes three times, leaving X equal to 4.
'DOU' is similar to Do While ('DOW'); the difference is that DOU always performs at least one pass through the loop, whereas DOW performs the comparison first and only enters the loop if the condition is met; otherwise the program continues after the End (EndDo). For example:
Eval X=1
DOW X=4
Eval X=X+1
EndDo
The result is that the loop does not execute, and the value of X remains 1.

Q. Which property helps us to consider using ACE?
Ans: A very small population of intercompany data forces us to consider using ACE.

Q. Which two functions can produce a report that includes only Elimination entities?
Ans: The two functions that can produce a report including only Elimination entities are a Fixed Name List and a Dynamic Name List.

Q. Explain dense and sparse dimensions?
Ans: A dense dimension is a dimension in which data exists for most combinations of dimension members, whereas a sparse dimension is a dimension with a low probability that data will exist for every combination of dimension members.

Q. What are the three primary build methods for building dimensions?
Ans: The three primary build methods for building dimensions are:
· Generation references
· Level references
· Parent-child references

Q. Differentiate between ASO and BSO?
Ans: We cannot write back in ASO, although we can write back in BSO. Most dimensions in ASO are sparse, whereas in BSO most of them are dense. We cannot create more than one database per application in ASO, but in BSO we can. If we have more than 10 dimensions, we should opt for ASO.

Q. Explain attributes?
Ans: A classification of a member in a dimension is known as an attribute. We can select and group members based on their associated attributes. We can also specify an attribute while performing calculations and use calculation functions. For example, the product dimension in the Sample Basic database has attributes such as package type, size, and flavor.
We can add attributes to the dimensions and retrieve data with them, for example, to retrieve 'coke with 8 Oz with bottles'.
IBM WebSphere Interview Questions
Q. Explain about WebSphere?
Ans: The word WebSphere popularly refers to IBM's middleware technology products. WebSphere is known for its turnkey operation in e-business applications. It has runtime components and tools that help in creating applications which run on WAS (WebSphere Application Server).

Q. Explain about WebSphere Commerce?
Ans: IBM WebSphere Commerce is a single platform that offers complete e-commerce solutions to developers. It can be very productive if you are planning to do business with consumers, businesses, and, indirectly through channel partners, all together.

Q. Detail the architecture of WebSphere?
Ans: WebSphere is built on three main components:
· A database
· A J2EE application server
· A web server
The databases it supports are DB2, Oracle, and Cloudscape. The application server is IBM WAS, and the supported web servers are IBM HTTP Server, Microsoft IIS, and the Sun web server.

Q. State some of the features present in WebSphere?
Ans: Some of the features present in WebSphere are:
· Order management
· WebSphere Commerce Accelerator
· Analytical and business intelligence
· Open standards such as Java, EJB, etc.
· WebSphere Commerce Payments, customer care, etc.

Q. Explain about IBM WebSphere Edge Server?
Ans: WebSphere Edge Server is used to improve the performance of web-based systems. It can be used as a forward or reverse proxy server. Basically, four components are present: Network Dispatcher, Caching Proxy, Content Distribution, and Application Service at the Edge.

Q. Explain about Extended Deployment?
Ans: WebSphere Application Server Extended Deployment (XD) increases the functionality of the server in two main areas: manageability and performance. Dynamic virtualization between servers is possible with the help of XD.
A standalone distributed cache was added to it under the performance header; it is known as ObjectGrid.

Q. Explain about the security features present in WAS?
Ans: The security model for WebSphere is primarily based on the Java EE security model. It also depends upon the operating system. User authentication and authorization mechanisms are also provided in WAS. Lightweight Third Party Authentication (LTPA) is the main security mechanism present in WAS.

Q. Explain about asymmetric clustering?
Ans: Asymmetric clustering applications are primarily used in electronic trading systems employed in banks. Some of the features: partitions can be declared at runtime and are usually run on a single cluster at a time, and work specific to a particular partition can be routed to that cluster.

Q. Explain the various administrator benefits of using WebSphere?
Ans: WebSphere greatly reduces the work of the server administrator, as he can manage the load on servers efficiently without any hassles. It also gives him the flexibility to divide the load and applications among different server farms. He can also predict the incoming load on servers, and gets email alerts, restart options, memory leak detection, and so on.

Q. Explain about the Caching Proxy of IBM WebSphere Edge Server?
Ans: A caching proxy can be configured in the forward direction or as a reverse proxy. Content requested by the user is cached by Edge before sending or adhering to the query. Page fragments arising from JSPs or servlets are cached by Edge, although that caching process is slow. The performance and scalability of J2EE applications can be increased by Edge.

Q. Explain about the network deployment feature present in WAS?
Ans: Managing singletons is a thing of the past, as WAS provides hot recovery of singletons, which makes you forget about your GC-collected singletons. Transaction logs can be stored on a shared file system. For clustering runtime operations, the deployment manager's role was eliminated.
J2EE failover support and cell configuration support are also present.

Q. Explain about IBM WebSphere Integration Developer?
Ans: WebSphere Integration Developer (WID) provides an IDE to build applications based on service-oriented architecture. WebSphere Process Server and WebSphere ESB applications are built with WID. WID is built on RAD Eclipse-based technology.

Q. Explain about Compute Grid?
Ans: Compute Grid is also known as WebSphere Batch. WebSphere Extended Deployment offers a Java batch processing system called Compute Grid, which forms an additional feature of the WebSphere Network Deployment environment. Various features are provided to help a developer create, manage, and execute batch jobs: the job scheduler, xJCL, the batch container, and the batch programming controller.

Q. Explain about WebSphere MQ Real-time Transport?
Ans: This feature is very useful for instant messaging across different clients through intranet and internet. It supports high volume and high performance across different clients. It uses the concept of lightweight transport, which is based on IP rather than the queue process.

Q. Explain about the WebSphere MQ JMS Provider?
Ans: WebSphere MQ and WebSphere Business Integration Message Broker are very useful in providing Java messaging services to a wide range of clients (publish–subscribe and point-to-point). Java classes are chiefly responsible for translating the API calls to the APIs defined by WebSphere MQ. It is very useful to have knowledge of WebSphere MQ for proper configuration.

Q. Explain the attribute CHANNEL in WebSphere MQ?
Ans: CHANNEL specifies the name of the server connection channel. Generally this is the WebSphere MQ network abstraction. The default used by CHANNEL is SVRCONN, the server connection channel type. This channel is generally used by the client to communicate with the queue manager.

Q. Is the naming of a connection factory independent of the name specified by the JMS client?
Ans: Yes, the naming of the connection factory is independent of the name specified by the JMS client. This is made possible by WAS (WebSphere Application Server) with its resource references, which isolate the application from the object names. This feature is important because it gives us the flexibility to change the administered object without changing the JMS client code.

Q. What About The Master Repository?
Ans: The deployment manager contains the MASTER configuration and application files. All updates to the configuration files should go through the deployment manager.

Q. Tell Me The IHS Executable Files (bin Directory Files)?
Ans: Apache, ApacheMonitor, htpasswd, htdigest, htdbm, ldapstash, httpd.exe.

Q. Why Is The httpd.conf File Given During Plug-in Installation?
Ans: To identify the web server (port, virtual hosts) and configure the web server definition.

Q. How To Configure A Remote System's httpd.conf File?
Ans: Select the web server machine (remote).

Q. What Are The Several Types Of Log Files In The AppServer?
Ans: system out, system err, trace, native out, native err, activity.

Q. WebSphere Packages?
Ans: Express, Base, Network Deployment.

Q. What Is A Profile?
Ans: Profiles are a set of files that represent a WebSphere Application Server configuration.

Q. What Is A Trace?
Ans: A trace is an informational record that is intended for service engineers or developers to use. As such, a trace record might be considerably more complex, verbose and detailed than a message entry.

Q. What Is Heap Memory?
Ans: Heap memory is the storage space for objects (and object references) created at run time in a JVM.

Q. An OutOfMemory Exception Occurs; How To Handle That Exception?
Ans: Increase the heap memory size.

Q. What About IHS?
Ans: IHS (IBM HTTP Server) is a web server. It serves static content only and accepts only HTTP requests.

Q. What About The Plug-in?
Ans: The plug-in is a module that acts as the interface between the application server and the web server; the plug-in process receives the request from the client first. If the request is for dynamic content, the plug-in diverts the request to the WebSphere application server. If the request is for static content, the plug-in forwards it to the HTTP server.

Q. What Is Global Security?
Ans: It provides authentication and authorization for the WebSphere Application Server domain (administration client or console).

Q. How To Configure Global Security?
Ans: Open the console, select the security option in the menu, select the LocalOS registry in the user registry, and enter the username and password. Again select global security, then the LTPA option, and provide the password. Save the configuration, restart the deployment server, and re-login to the console.

Q. What Is SSL?
Ans: SSL is a protocol for providing encrypted data communications between two processes.

Q. What Is PMI? How To Configure PMI?
Ans: Monitoring and Tuning -> PMI -> select any process (server1, nodeagent, dmgr), enable PMI, then Apply and Save. Then select Performance Viewer -> Current Activity, select the enabled process and click the Start Monitoring button.

Q. What Is The Unix Command To Display All Server Processes?
Ans: ps -ef | grep java

Q. What Is A Node?
Ans: A logical group of servers.

Q. How To Start The Server?
Ans: startServer.sh server1

Q. How Do You Get A Nodeagent? What Do You Have To Install To Get A Nodeagent?
Ans: A custom profile.

Q. How To Add A Node?
Ans: addNode.sh <dmgr_host> 8879

Q. What Is The Application Server?
Ans: The application server provides a runtime environment in which to deploy, manage, and run J2EE applications.

Q. What Is A Node?
Ans: A node corresponds to a physical computer system with a distinct IP host address. The node name is usually the same as the host name for the computer.

Q. How Many Types Of Profiles Are There In The ND Product?
Ans:
1. Deployment manager profile
2. Application server profile
3. Custom profile

Q. What Is The Difference Between The Dmgr And The Other Profiles?
Ans: The dmgr profile is used for administration: it has the admin console and supports the distributed environment, but you do not put applications on it. An application server profile is a plain node with its own admin console; it works independently and you put applications on it. A custom profile is initially an empty node with no admin console and no applications; it works only after being federated into a cell.

Q. Difference Between 5.0 And 6.0?
Ans: WebSphere Studio 3.5 comes with VisualAge for Java. WSAD 5.0 supports the J2EE 1.3 Java specifications. RAD 6.0 supports J2EE 1.4 and is integrated with Eclipse 3.0, a UML visual editor, Jakarta Tomcat, Ant scripting, an EJB universal test client, and SOA tools.

Q. What Is The Difference Between A Web Server And An Application Server?
Ans: A web server serves pages for viewing in a web browser; an application server exposes business logic to client applications through various protocols. A web server exclusively handles HTTP requests, whereas an application server serves business logic to application programs through any number of protocols. The web server delegation model is fairly simple: when a request comes into the web server, it simply passes the request to the program best able to handle it (a server-side program). It may not support transactions or database connection pooling. An application server is more capable of dynamic behaviour than a web server, and an application server can also be configured to work as a web server. Simply put, an application server is a superset of a web server.

Q. Difference Between WebLogic And WebSphere?
Ans: Both BEA WebLogic and IBM's WebSphere provide J2EE-based application servers and are competitors. WebSphere leverages more on connectivity with MQ and legacy systems, with strong dominance in J2EE.

Q. Some Problem Is There In The Web Server; Which Log File Contains This Information?
Ans: http.log, plugin.log

Q. What Is JDBC?
Ans: JDBC is a low-level pure Java API used to execute SQL statements.
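The heap-memory questions above (what heap memory is, and handling OutOfMemory by raising the heap size) can be explored from code. A minimal sketch using only the standard `Runtime` API, with no WebSphere-specific classes assumed:

```java
public class HeapInfo {
    public static void main(String[] args) {
        // Heap limits as seen by this JVM; the ceiling is raised with -Xmx (e.g. -Xmx1024m)
        Runtime rt = Runtime.getRuntime();
        long maxMb = rt.maxMemory() / (1024 * 1024);     // -Xmx ceiling
        long totalMb = rt.totalMemory() / (1024 * 1024); // currently committed heap
        long freeMb = rt.freeMemory() / (1024 * 1024);   // free space within the committed heap
        System.out.println("max=" + maxMb + "MB total=" + totalMb + "MB free=" + freeMb + "MB");
    }
}
```

Running this under different `-Xmx` settings shows directly how "increasing the heap memory size" changes what the JVM reports before an OutOfMemoryError would occur.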
Q. What Is A Datasource?
Ans: A data source is associated with a JDBC provider that supplies the specific JDBC driver implementation class.

Q. What Is The Difference Between Type 4 And Type 2 Drivers?
Ans: A type 4 driver is a pure Java driver and needs no client-side software. A type 2 driver is not a pure Java driver; it requires database client software on the client machine.

Q. Some Application Is Not Accessible; Which Log File Contains This Information?
Ans: SystemOut, SystemErr

Q. With A Type 3 Driver, On Which Machine Do You Install The Client Software?
Ans: On the middleware (server-side) machine.

Q. There Are Two Databases (Oracle And DB2); Is It Possible To Create 3 Datasource Names For Oracle And 2 Datasource Names For DB2?
Ans: Possible.

Q. What Is JNDI?
Ans: We can register resources in the application server's Java Naming and Directory Interface (JNDI) namespace. Client applications can then obtain references to these resource objects in their programs.

Q. Why Use The Bootstrap Port Number?
Ans: Client applications use the bootstrap port to access WebSphere's built-in Object Request Broker (ORB) to use enterprise Java beans in applications installed on the application server. The Java Naming and Directory Interface service provider URL used by the client application needs to reference the bootstrap port to obtain an initial context for looking up the EJBs it wants to use. (It is used for communication between two servers.)

Q. What Are The AppServer Components?
Ans: Admin server, web container, EJB container, J2C service, naming server, messaging engine, security server.

Q. LDAP Port Number?
Ans: 389 (or 636 for SSL).

Q. Packages Of WebSphere?
Ans: Express, Base, Network Deployment.

Q. What Is The Web Container?
Ans: The web container provides a runtime environment for servlets, JSPs, JavaBeans, and static content.

Q. How To Find Out Disk Space From The Command Prompt?
Ans: du -sk (KB) and du -sm (MB) report disk usage of a directory; df -k reports free space.

Q. How To Find Out Certain Server Configuration Details Like Port Number, Server Name, Node Name And PID?
Ans: Through the admin console.
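The shell commands above report disk usage and free space; the same information is available portably from Java via `java.io.File`. A small sketch (the root path `/` is an assumed default, not from the original text):

```java
import java.io.File;

public class DiskSpace {
    public static void main(String[] args) {
        // Query the filesystem backing the given path (root by default)
        File fs = new File(args.length > 0 ? args[0] : "/");
        long totalMb = fs.getTotalSpace() / (1024 * 1024);
        long freeMb = fs.getUsableSpace() / (1024 * 1024); // space actually available to this JVM
        System.out.println("total=" + totalMb + "MB free=" + freeMb + "MB");
    }
}
```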
Q. Is It Possible To Configure The Plug-in Through The Admin Console?
Ans: Possible.

Q. Where To Set The Path?
Ans: Environment -> WebSphere Variables

Q. An Application Is Installed But Not Working. What Are The Troubleshooting Steps?
Ans: See that the JVM and the application are up; check the plugin-cfg.xml file for the root context used by the web application; if it does not exist, regenerate the plug-in and restart the web server.

Q. The Application Installed Fine And The Plug-in Was Generated, But The Application Is Still Not Working. Which Log Do You Check?
Ans: plugin.log

Q. Default Admin Port?
Ans: 9060 (SSL: 9043)

Q. Default Bootstrap Port?
Ans: 2809

Q. How To Hit The Application Without Hitting The Web Server?
Ans: Use the web container port on the application server directly.

Q. In How Many Ways Can You Perform Administration?
Ans: Admin console, scripting, and JMX.

Q. Number Of Ways Of Doing Deployments?
Ans: The admin console, or wsadmin scripts (Jython/JACL).

Q. What Is The CELL_DISCOVERY_ADDRESS?
Ans: The node uses this port to talk to the dmgr.

Q. What Is The NODE_DISCOVERY_ADDRESS?
Ans: The dmgr uses this port to talk to the node.

Q. How Does WebSphere Discover A Change In A JSP And Compile It?
Ans: WebSphere uses an algorithm based on the timestamps of the .jsp and .class files: it checks that the timestamp of the .class file is later than that of its corresponding .jsp file; if the .jsp file is newer, it is recompiled.
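The JSP-recompilation answer above boils down to a timestamp comparison on two files. A self-contained sketch of that check using plain `java.io.File`; the file names are hypothetical and temp files stand in for a real .jsp and its compiled .class:

```java
import java.io.File;
import java.io.IOException;

public class JspStaleCheck {
    // A .class file is stale if it is missing, or its .jsp source was modified after it was compiled
    static boolean needsRecompile(File jsp, File clazz) {
        return !clazz.exists() || jsp.lastModified() > clazz.lastModified();
    }

    public static void main(String[] args) throws IOException {
        File jsp = File.createTempFile("index", ".jsp");
        File clazz = File.createTempFile("index", ".class");
        clazz.setLastModified(jsp.lastModified() + 5000); // compiled after the source
        System.out.println(needsRecompile(jsp, clazz));   // false: class is newer
        jsp.setLastModified(clazz.lastModified() + 5000); // source edited later
        System.out.println(needsRecompile(jsp, clazz));   // true: recompile needed
        jsp.delete();
        clazz.delete();
    }
}
```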
Informatica MDM Interview Questions
Q. What Is MDM?
Ans: Master data management (MDM) is a comprehensive method of enabling an enterprise to link all of its critical data to one file, called a master file, that provides a common point of reference. When properly done, MDM streamlines data sharing among personnel and departments.

Q. Has MDM Gone Mainstream? Do People "Get It"?
Ans: There is huge awareness of MDM. Gartner recently hosted an MDM conference for the first time, and they pulled in about 500 attendees. As to whether they "get it," it depends on who you're talking to. Most of the IT people get it. Business users understand the moniker, but they might or might not understand MDM quite as well. I find that business users often require education in terms of what it can do for them and what value it brings. With IT people, it's a different conversation; they want to know more about the features and how we differentiate ourselves from the competition.

Q. Are You Seeing Awareness Translate Into Bigger Budgets For MDM?
Ans: It's a matter of awareness and the problem becoming urgent. We are seeing budgets increase and greater success in closing deals, particularly in the pharmaceutical and financial services industries. Forrester predicts MDM will be a $6 billion market by 2010, which is a 60-percent growth rate over the $1 billion MDM market last year. Gartner forecasts that 70 percent of Global 2000 companies will have an MDM solution by the year 2010. These are pretty big numbers.

Q. What Are The Biggest Technical And Management Challenges In Adopting MDM?
Ans: Technical folks often face a challenge in data governance: selling the project and getting the funding. Management is looking for return on investment; they need MDM tied to quantifiable benefits that business leaders understand, like dollar amounts around ROI.

Q. What Is Data Warehousing?
Ans: A data warehouse is the main repository of an organization's historical data, its corporate memory. It contains the raw material for management's decision support system. The critical factor leading to the use of a data warehouse is that a data analyst can perform complex queries and analysis, such as data mining, on the information without slowing down the operational systems. Data warehousing is a collection of data designed to support management decision making. Data warehouses contain a wide variety of data that present a coherent picture of business conditions at a single point in time. It is a repository of integrated information, available for queries and analysis.

Q. What Are The Fundamental Stages Of Data Warehousing?
Ans:
Offline Operational Databases: Data warehouses in this initial stage are developed by simply copying the database of an operational system to an off-line server, where the processing load of reporting does not impact the operational system's performance.
Offline Data Warehouse: Data warehouses in this stage of evolution are updated on a regular time cycle (usually daily, weekly or monthly) from the operational systems, and the data is stored in an integrated, reporting-oriented data structure.
Real-Time Data Warehouse: Data warehouses at this stage are updated on a transaction or event basis, every time an operational system performs a transaction (e.g. an order, a delivery or a booking).
Integrated Data Warehouse: Data warehouses at this stage are used to generate activity or transactions that are passed back into the operational systems for use in the daily activity of the organization.

Q. What Is Dimensional Modeling?
Ans: The dimensional data model concept involves two types of tables and is different from the 3rd normal form. This concept uses a Facts table, which contains the measurements of the business, and Dimension tables, which contain the context (the dimensions of calculation) of the measurements.

Q. What Is Informatica PowerCenter?
Ans: PowerCenter is data integration software from Informatica Corporation which provides an environment that allows loading data into a centralized location such as a data warehouse. Data can be extracted from multiple sources, transformed according to the business logic, and loaded into files and relational targets.

Q. What Are The Components Of Informatica PowerCenter?
Ans: The components of Informatica PowerCenter are:
PowerCenter Domain
PowerCenter Repository
Administration Console
PowerCenter Client
Repository Service
Integration Service
Web Services Hub
Data Analyzer
Metadata Manager
PowerCenter Repository Reports

Q. What Is A Mapping?
Ans: A mapping is a set of source and target definitions linked by transformation objects that define the rules for data transformation. Mappings represent the data flow between sources and targets.

Q. What Is A Mapplet?
Ans: A mapplet is a reusable object that contains a set of transformations and enables reuse of that transformation logic in multiple mappings.

Q. What Is A Transformation?
Ans: A transformation is a repository object that generates, modifies or passes data. Transformations in a mapping represent the operations the Integration Service performs on the data. Data passes through transformation ports that are linked in a mapping or mapplet.

Q. Describe The Foreign Key Columns In The Fact Table And Dimension Table?
Ans: Foreign keys of dimension tables are primary keys of entity tables. Foreign keys of fact tables are primary keys of dimension tables.

Q. What Is Data Mining?
Ans: Data mining is the process of analyzing data from different perspectives and summarizing it into useful information.

Q. What Is A Fact Table?
Ans: A fact table contains the measurements of business processes, as well as the foreign keys for the dimension tables. For example, if your business process is "paper production", then "average production of paper by one machine" or "weekly production of paper" would be considered measurements of the business process.

Q. What Is A Dimension Table?
Ans: A dimension table contains the textual attributes of the measurements stored in the fact tables. A dimension table is a collection of hierarchies, categories and logic which a user can use to traverse hierarchy nodes.

Q. What Are The Different Methods Of Loading Dimension Tables?
Ans: There are two different ways to load data into dimension tables.
Conventional (Slow): All the constraints and keys are validated against the data before it is loaded; this way data integrity is maintained.
Direct (Fast): All the constraints and keys are disabled before the data is loaded. Once the data is loaded, it is validated against all the constraints and keys. If data is found invalid or dirty, it is not included in the index and all future processes skip this data.

Q. What Are The Objects That You Can't Use In A Mapplet?
Ans:
COBOL source definitions
Joiner transformations
Normalizer transformations
Non-reusable sequence generator transformations
Pre- or post-session stored procedures
Target definitions
PowerMart 3.5-style LOOKUP functions
XML source definitions
IBM MQ source definitions

Q. What Are The Different Ways To Migrate From One Environment To Another In Informatica?
Ans:
We can export the repository and import it into the new environment.
We can use Informatica deployment groups.
We can copy folders/objects.
We can export each mapping to XML and import it in the new environment.

Q. What Is The Difference Between A Mapping Parameter And A Variable?
Ans: A mapping parameter is a static value that you define before running the session, and its value remains until the end of the session. When we run the session, PowerCenter evaluates the value from the parameter and retains the same value throughout the session.
When the session runs again, it reads its value from the file.
A mapping variable is dynamic; it can change at any time during the session. PowerCenter reads the initial value of the variable before the start of the session, changes its value using variable functions, and before ending the session saves the current value (the last value held by the variable). The next time the session runs, the variable's value is the last value saved in the previous session.

Q. How To Delete Duplicate Records In Informatica?
Ans: The following are ways to remove duplicate records:
In the Source Qualifier, use select distinct.
Use an Aggregator and group by all fields.
Override the SQL query in the Source Qualifier.

Q. What Are The Different Types Of Repositories That Can Be Created Using Informatica Repository Manager?
Ans:
Standalone Repository: A repository which functions individually and is unrelated to any other repositories.
Global Repository: This is a centralized repository in a domain. This repository can contain shared objects across the repositories in a domain. The objects are shared through global shortcuts.
Local Repository: A local repository is within a domain. A local repository can connect to a global repository using global shortcuts and can use objects in its shared folders.

Q. How To Find All Invalid Mappings In A Folder?
Ans: Use the following query:
SELECT MAPPING_NAME FROM REP_ALL_MAPPINGS
WHERE SUBJECT_AREA = 'YOUR_FOLDER_NAME'
AND PARENT_MAPPING_IS_VALID <> 1

Q. What Are The Data Movement Modes In Informatica?
Ans: The data movement mode determines how the PowerCenter server handles character data. We choose the data movement mode in the Informatica server configuration settings. Two data movement modes are available in Informatica:
ASCII mode
Unicode mode

Q. What Is OLTP?
Ans: OLTP is an abbreviation of On-Line Transaction Processing. Such a system is an application that modifies data the instant it receives it and has a large number of concurrent users.

Q. What Is OLAP?
Ans: OLAP is an abbreviation of On-Line Analytical Processing. Such a system is an application that collects, manages, processes and presents multidimensional data for analysis and information management purposes.

Contact for more on Informatica MDM Online Training
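The duplicate-removal answer above relies on SELECT DISTINCT or an Aggregator inside Informatica. Outside the tool, the same idea is an order-preserving set. A sketch in Java (the comma-separated row strings are hypothetical sample data, not from any real source):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class Dedup {
    // Order-preserving de-duplication: the in-memory analogue of SELECT DISTINCT
    static List<String> distinct(List<String> rows) {
        return new ArrayList<>(new LinkedHashSet<>(rows));
    }

    public static void main(String[] args) {
        List<String> rows = List.of("A,100", "B,200", "A,100", "C,300");
        System.out.println(distinct(rows)); // [A,100, B,200, C,300]
    }
}
```

A LinkedHashSet is used rather than a HashSet so that, like a Source Qualifier with select distinct, the first occurrence of each row is kept in its original order.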
Magento Interview Questions
Q. What Is Magento?
Ans: Magento is open-source e-commerce software, created by Varien, which is useful for online business and has a flexible modular architecture. It is scalable and has many control options that help the user. Magento is an e-commerce platform which offers companies ultimate e-commerce solutions and an extensive support network.

Q. Why Use Magento?
Ans: The uses of Magento are:
Magento is open-source e-commerce software.
It is scalable and allows small companies to build a business.
It provides searching and sorting of products in several ways.
It easily integrates with many of the third-party sites which are needed to run an effective e-commerce website.
Using it, a customer can order or purchase any number of products; there are no limits on the number of products purchased.

Q. List The Web-Hosting Sites For Magento?
Ans: SiteGround, BlueHost, HostGator, InMotion, Arvixe, Site5.

Q. What Are The Disadvantages Of Magento?
Ans: The disadvantages of Magento are:
Magento uses larger disk space and memory.
It takes much time to build customized functionality.
It is very slow compared to other e-commerce sites.
It needs a proper hosting environment; if the hosting environment is improper, the user can face problems.

Q. Name The Web Servers That Support Magento?
Ans: Apache 2.x, Nginx 1.7.x.

Q. What Are Magento Products?
Ans: Products are the items or things that are sold in Magento. A product can be anything that is capable of satisfying customer needs. This includes both physical products and services.

Q. Name The Product Types That Are Available In Magento?
Ans: The product types available in Magento are:
Simple Products
Grouped Products
Configurable Products
Virtual Products
Bundled Products
Downloadable Products

Q. What Is Inventory?
Ans: Inventory allows setting a product's stock quantity. For instance, you have a product with 100 units in stock. If you set the stock availability to "Out of Stock", it will force the item to be out of stock.

Q. Name The Categories Of E-commerce?
Ans: The categories of e-commerce are:
Business to Business (B2B)
Business to Consumer (B2C)
Consumer to Consumer (C2C)
Consumer to Business (C2B)

Q. What Does Rate Percent Mean In The Manage Tax Rate Window?
Ans: Rate Percent specifies the percentage of the tax rate.

Q. What Does The Priority Field Specify In Manage Tax Rules?
Ans: The Priority field specifies when the tax should be applied relative to other tax rules.

Q. What Is The Use Of The Zero Subtotal Checkout Panel?
Ans: The Zero Subtotal Checkout panel is a payment option that is displayed when the order total is zero, so the customer is not required to enter payment details.

Q. What Is The Use Of The 3D Secure Card Validation Field In Saved CC?
Ans: It is additional security functionality, where the customer needs to provide a credit card password to complete the purchase order.

Q. Which Are The Methods Of The PayPal Payment Gateways?
Ans: The two methods of PayPal payment gateways are:
Payflow Pro (includes Express Checkout)
Payflow Link (includes Express Checkout)

Q. What Is Payflow Pro?
Ans: The Payflow Pro option is a customizable payment gateway which can be used with a merchant account to process credit card transactions.

Q. What Is Payflow Link?
Ans: The Payflow Link option, often called a hosted payment gateway, keeps the customer on your site by providing a fast and easy way to add transaction processing to your site.

Q. What Is The Use Of The My Cart Link Panel?
Ans: The My Cart Link panel specifies whether the number of quantities in the cart should be shown or whether the number of different products should be shown, using the Display Cart Summary field.

Q. What Is Google Checkout In Magento?
Ans: Google Checkout is an online payment processing service provided by Google. Magento allows integration of online stores with Google Checkout. Like PayPal, it simplifies the process of paying for online purchases.

Q. What Is Magento Manage Order?
Ans: Order management is an important thing which allows a business to run smoothly and keeps customers happy, making them more likely to visit your site again in the future.

Q. What Is Magento Google Analytics?
Ans: Google Analytics is a Google service for those who actively manage websites; it adds analytics to a Magento store, including e-commerce tracking and conversions.

Q. What Is The Use Of Page Layout In Magento?
Ans: Layout files are useful in rendering the front pages of Magento.

Q. What Is The Magento Content Management System (CMS)?
Ans: The Magento CMS (Content Management System) section is used to manage all web site pages. It is a way of promoting the products by providing valuable information to the customers, and it increases visibility to search engines.

Q. What Are Static Blocks?
Ans: A static block is a piece of content that can be used anywhere in the pages. Magento allows creating blocks of content that can be used throughout the store and can be added to any page or another block.

Q. What Are Polls?
Ans: Polls are used to get customers' opinions and preferences. The poll results appear immediately after a response is submitted.

Q. How To Optimize The Magento Environment?
Ans: The following points describe how to optimize the Magento environment:
It uses a complex database, so it needs to run on dedicated servers.
The Magento application can be optimized by using cloud computing.
Merge your JavaScript and CSS files, which reduces the load time dramatically since only one merged file is loaded.
Proper MySQL configuration is one of the most important aspects in terms of performance.
Always upgrade to the latest Magento version, which allows it to perform better.

Q. How To Optimize The Magento Configuration?
Ans: The following points specify how to optimize the Magento configuration:
To speed up Magento performance, don't run MySQL and the web server on the same machine.
Do not host files on your web server that you do not use.
Optimization of session storage.
Enabling the Magento flat catalog. (Magento uses a complex and resource-intensive Entity-Attribute-Value based catalog; after initial catalog establishment, enabling the flat catalog can dramatically improve database query time.)
Identification and disabling of unused Magento modules.

Q. What Is The Process Of Code Optimization?
Ans: The process of code optimization covers:
Removal of unused or unnecessary code processes.
Compression and aggregation of JavaScript and CSS files.
Conformance of all site images to optimal web image sizes.
Identification of bottleneck processes (processes that cause the entire flow to slow down or stop) in both the front end and the back end.

Q. How To Improve The Performance Of The Database?
Ans: The following points describe how to improve the performance of the database:
Unused data must be cleaned up regularly for better performance.
Optimization of database queries.
Configuration of the settings and limits of the database server (e.g. memory settings, query cache, sort buffer optimization).

Q. Which PHP Version Is Used For Magento?
Ans: PHP 5.4+

Q. What Is WSDL?
Ans: It stands for Web Services Description Language. It is used for describing web services and how to access them.

Q. What Does The "Only X Left" Threshold Mean In The Stock Options?
Ans: It is used to set a threshold number. When the units of that product drop to that number, an "Only X left" message is displayed on the product details page.

Q. What Is A Magento Payment Gateway?
Ans: A payment gateway processes the credit card data securely between customer and merchant, and also between merchant and payment processor. It is like a checkpoint that protects customers against attempts to gather their personal and financial information, and it also acts as a mediator between the merchant and the sponsoring bank.

Q. What Is The Order Life Cycle Process In Magento?
Ans: Orders follow a standard life cycle process. When customers place product orders, they arrive in the administration interface with a Pending status. As the order is processed, its status changes according to the current state in the processing workflow. Once the invoice is created for the order, the status changes from Pending to Processing. Next, creating a shipment for the order changes the status to Complete.

Q. Which Two Sections Are Present In The Design Section?
Ans: The two sections present in the Design section are:
Page Layout
Custom Design

Q. What Does The Page Layout Section Contain?
Ans: The Page Layout section contains a Layout option, which allows selecting a layout of your choice, and a Layout Update XML option, which inserts XML code.

Q. What Is The Use Of The Meta Data Section When Setting Up New Pages?
Ans: The Meta Data section contains the keywords and description of the page.

Q. How To Subscribe To Newsletters Using Magento?
Ans: A customer can subscribe to newsletters using Magento. The customer can sign up for the newsletter when he creates a new customer account, which contains a checkbox for signing up. For creating newsletters you need to enable the Newsletter option in your Magento configuration, to make sure that the customer has confirmed to receive the newsletter.

Q. How To Optimize The Magento Front-End Performance?
Ans: The following points show how to optimize the Magento front-end performance:
Use the latest version of PHP, so that front-end operations perform much better and faster. The newest released version may cause errors, so carefully read the release notes before checking out the new version.
Use a clean database to improve the performance of Magento. The database logs need to be cleared regularly; the database stores automatically created logs to keep track of record sessions and interactions.

Q. What Is A Grouped Product?
Ans: This is a group of simple products. In this type, you cannot specify a specific price for a product; you can just specify the discount.

Q. What Are Configurable Products?
Ans: In this type, the customer can select product options such as color and size before purchasing. Example: cell phones available in different colors and sizes.

Q. What Are Bundled Products?
Ans: Bundled products are products which cannot be sold separately and do not give any choice to the end user.

Q. What Is The Use Of The Backorder Field In The Product Stock Options Panel?
Ans: If it is enabled, customers can buy products even if they are out of stock.

Q. What Are Tax Rules?
Ans: Tax rules are entities that combine product tax classes, customer tax classes and tax rates.

Q. What Is The Use Of The Manage Stores Section?
Ans: In the Manage Stores section, you will see Website Name, Store Name and Store View Name columns.

Q. What Is E-commerce?
Ans: E-commerce (electronic commerce) is a type of business that involves commercial transactions, or the purchasing or selling of goods and services, through electronic channels known as the internet.

Q. What Are The Features Of Magento?
Ans: The features of Magento are:
Magento provides different payment methods such as credit cards, PayPal, cheques, money orders and Google Checkout.
It provides shipping of products in one order to multiple addresses.
It is easy to manage orders using the admin panel.
It filters products and displays them in grid or list format.

Q. What Are The Advantages Of Magento?
Ans: The advantages of Magento are:
It is user-friendly e-commerce software.
It is compatible with smartphones, tablets and other mobile devices.
It provides multiple payment options, so every visitor can pay based on their preferred payment gateway.
It has many extensions which support the development of an online store.

Q. What Database Does Magento Support?
Ans: The MySQL database.

Q. Which MySQL Version Is Used For Magento?
Ans: MySQL 5.1

Q. What Is Consumer To Business?
Ans: This transaction is between consumer or customer and business or companies where consumer makes a product that the company uses to complete business. Q. Explain The Architecture Of Magento? Ans: The architecture of Magento is a typical PHP MVC (Model-View-Controller) application where the entire controller will be in one folder and all the models in another. Files are grouped together and known as modules in Magento. Q. What Are The Different Features Of Magento? Ans: Some of the basic features of Magento are: SEO Friendly Google sitemap support Reporting and analytics Customer accounts Order management Site management Payment Marketing promotion and tools International support Extremely modular architecture Q. What Is Eav In Magento? Ans: EAV stands for Entity Attribute Value. It is a technique that facilitates users to add unlimited columns to their table virtually. Q. What Are The Limitations Of Magento? Ans: There are three reasons to use UNITS in programming: Magento is written in PHP so it is comparatively slower in performance to other e-Commerce solutions. Magento requires more space and memory. It can consume gigabytes of RAM during heavy processes. It becomes complex if it is not using object-oriented programming. Q. How Can You Enhance The Magento Performance? Ans: The first Pascal standard was documented by the author of the Pascal programming language Niklaus Wirth but it was an unofficial Pascal standard. Disable the Magento log Disable any un-used modules Magento Caching Optimize your image Optimize your Server Use a Content Delivery Network (CDN) Put Stylesheets at the Top Put Scripts at the Bottom Avoid CSS Expressions Q. Explain How To Change The Magento Core Api Setting? Ans: You have to follow these steps to change Magento core API setting. 
Go to the Admin menu and choose System -> Configuration. Select Magento Core API on the left side of the Configuration panel, under Services. Click to expand the General Settings section, then type the name of the default response charset that you want to use and determine the client session timeout in seconds. Click the Save Config button when complete.
Q. Can All Billing Information Be Managed Through Magento?
Ans: You can do the following things through the client's Magento account: update your billing address, add a credit card, view your billing history, add a PayPal account, and produce a print-ready receipt.
Q. What Are The Advantages Of Applying Connect Patches In Magento?
Ans: In Magento, applying Connect patches provides the following benefits: it enables easy installation of packages, overwriting any existing translations at the same time; it enhances security (by default Magento Connect uses HTTP to download extensions instead of FTP); it lets extension developers create new extensions with a dash character in the name; and Magento administrators are now informed when someone tries to install an extension with insufficient file system privileges.
Q. How Can You Make Magento More Secure For The Client?
Ans: You can use the following instructions to make Magento more secure for the client: use strong passwords and change them at regular intervals; disable remote access to Magento Connect Manager; disable the Downloader on production sites; restrict access to safe IP addresses.
Q. How To Configure Magento To Work With Another Domain?
Ans: To configure Magento to work with another domain, change the Magento base URL option in the admin area. Follow these steps: go to the Magento admin area > System > Configuration and click Web on the left menu; select the Unsecure option; edit the Base URL field to change the URL that will be used for normal (HTTP) connections. Contact for more on Magento Online Training
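The EAV technique described above can be illustrated with a small sketch. This is plain Python, not Magento code; the entity and attribute names are hypothetical, and real EAV storage lives in database tables rather than in-memory lists:

```python
# Minimal EAV (Entity-Attribute-Value) sketch: instead of one wide table,
# values live in a narrow store keyed by (entity_id, attribute), so new
# attributes can be added without altering any schema.
entities = {1: "product"}          # the entity table
attribute_values = []              # the value table: (entity_id, attribute, value)

def set_attr(entity_id, attribute, value):
    attribute_values.append((entity_id, attribute, value))

def get_attrs(entity_id):
    return {a: v for (e, a, v) in attribute_values if e == entity_id}

set_attr(1, "name", "T-Shirt")
set_attr(1, "color", "blue")       # a "new column" added with no migration
set_attr(1, "size", "XL")

print(get_attrs(1))  # {'name': 'T-Shirt', 'color': 'blue', 'size': 'XL'}
```

The trade-off is the one the limitations question hints at: reads must gather many rows per entity, which is part of why Magento is comparatively slow.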
Mule ESB Interview Questions
Q. What Is Mule?
Ans: Mule is a lightweight event-driven enterprise service bus (ESB) and an integration platform. It is a lightweight and modular solution that can scale from an application-level messaging framework to an enterprise-wide, highly distributable object broker.
Q. What Difficulties Does Mule Address?
Ans: Transport: applications can accept input through a variety of means, from the file system to the network. Data format: speaking the right protocol is only part of the solution, as applications can use almost any form of representation for the data they exchange. Invocation styles: synchronous, asynchronous, or batch call semantics entail very different integration strategies. Lifecycles: applications of different origins that serve varied purposes tend to have disparate development, maintenance, and operational lifecycles.
Q. How Was Mule Designed?
Ans: Mule's core was designed as an event-driven framework combined with a unified representation of messages, expandable with pluggable modules. These modules provide support for a wide range of transports or add extra features, such as distributed transactions, security, or management. Mule was also designed as a programmatic framework offering programmers the means to graft on additional behavior, such as specific message processing or custom data transformation.
Q. Why The Name Mule?
Ans: There is a lot of infrastructure work to be done before we can really start thinking about implementing any logic, and this infrastructure work is regarded as "donkey work", as it needs doing for every project. A mule is also commonly referred to as a carrier of load, moving it from one place to another. The load Mule specializes in moving is our enterprise information.
Q. What ESBs Are Available Apart From Mule?
Ans: All major JEE vendors (BEA, IBM, Oracle, Sun) have an ESB in their catalog. It is usually based on their middleware technologies and often sits at the core of a much broader SOA product suite.
There are also some commercial ESBs that have been built by vendors outside the field of JEE application servers, such as Progress Software, IONA Technologies, and Software AG.
Q. What Are The Differences Between Mule And Other Commercial ESBs?
Ans: Other commercial ESBs typically have: a prescriptive deployment model, whereas Mule supports a wide variety of deployment strategies; a prescriptive SOA methodology, whereas Mule can embrace the architectural style and SOA practices in place where it is deployed; a focus mainly on higher-level concerns, whereas Mule deals extensively with all the details of integration; a strict full-stack web service orientation, whereas Mule's capacities as an integration framework open it to all sorts of other protocols; and comprehensive documentation, a subject on which MuleSource has made huge progress recently.
Q. What Is The Model Layer In Mule?
Ans: The first logical layer is the model layer. A Mule model represents the runtime environment that hosts services. It defines the behavior of Mule when processing requests handled by services. The model provides services with supporting features, such as exception strategies. It also provides services with default values that simplify their configuration.
Q. What Is The Service Layer In Mule?
Ans: A Mule service is composed of all the Mule entities involved in processing particular requests in predefined manners. A service is defined by a specific configuration, which determines the different elements, from the different layers of responsibility, that will be mobilized to process the requests it is open to receive. Depending on the type of input channel it uses, a service may or may not be publicly accessible outside of the ESB.
Q. What Is The Transport Layer In Mule?
Ans: The transport layer is in charge of receiving or sending messages. This is why it is involved with both inbound and outbound communications.
A transport manifests itself in the configuration through the following elements: connectors, endpoints and transformers. A transport also defines one message adapter. A message adapter is responsible for extracting all the information available in a particular request (data, meta information, attachments, and so on) and storing it in a transport-agnostic fashion in a Mule message.
Q. What Is A Connector In Mule?
Ans: A connector is in charge of controlling the usage of a particular protocol. It is configured with parameters that are specific to this protocol and holds any state that can be shared with the underlying entities in charge of the actual communications. For example, a JMS connector is configured with a Connection, which is shared by the different entities in charge of the actual communication.
Q. What Is An Endpoint In Mule?
Ans: An endpoint represents the specific usage of a protocol, whether it is for listening/polling, reading from, or writing to a particular target destination. Hence it controls which underlying entities will be used with the connector they depend on. The target destination itself is defined as a URI. Depending on the connector, the URI will bear a different meaning; for example, it can represent a URL or a JMS destination.
Q. What Is A Transformer In Mule?
Ans: A transformer takes care of translating the content of a message from one form to another. It is possible to chain transformers to cumulate their effects. Transformers can kick in at different stages while a message transits through a service.
Q. What Is A Router In Mule?
Ans: Routers play a crucial role in controlling the trajectory a message follows as it transits through Mule. They are the gatekeepers of a service's endpoints, keeping messages on the right succession of tracks so they can reach their intended destinations. Certain routers act like big classification yards: they can split, sort, or regroup messages based on certain conditions.
Q.
What Is A Filter In Mule?
Ans: Filters are a powerful complement to the routers. Filters provide the brains routers need to make smart decisions about what to do with messages in transit. Some filters go as far as deeply analyzing the content of a message for a particular value on which their outcome will be based.
Q. What Is A Component In Mule?
Ans: Components are the centerpiece of Mule's services. Each service is organized with a component at its core and inbound and outbound routers around it. Components are used to implement a specific behavior in a service. This behavior can be as simple as logging messages or can go as far as invoking other services. Components can also have no behavior at all; in that case they are pass-through and make the service act as a bridge between its inbound and outbound routers.
Q. How Is A Mule Message Composed?
Ans: A Mule message is composed of different parts: the payload, which is the main data content carried by the message; the properties, which contain meta information much like the header of a SOAP envelope or the properties of a JMS message; optionally, multiple named attachments, to support the notion of multipart messages; and, optionally, an exception payload, which holds any error that occurred during the processing of the event.
Q. What Are Configuration Builders In Mule?
Ans: Mule uses configuration builders that can translate a human-authored configuration file into the complex graph of objects that constitutes a running node of this ESB. The main builders are of two kinds: a Spring-driven builder, which works with XML files, and a script builder, which can accept scripting-language files.
Q. Why Is The Spring-driven Configuration Builder Preferred Over The Script Builder?
Ans: The advantages of the Spring-driven configuration builder: It is the most popular, so you are more likely to find examples using this syntax.
It is the most user-friendly: Spring takes care of wiring together all the moving parts of the ESB, something you must do by hand with a script builder. It is the most expressive: dedicated XML schemas define the domain-specific language of Mule, allowing you to handle higher-level concepts than the scripting approach does.
Q. What Is The Bridge Component In Mule?
Ans: A bridge component is used to pass messages from the inbound router to the outbound one. A bridge is a neutral component: it does not perform any action or modify the messages it processes.
Q. What Tags Are Used To Configure Spring Elements In Mule?
Ans: The dedicated Spring namespace tags, such as <spring:beans> and <spring:bean>, are used to configure Spring elements.
Q. What Approaches Are Available For Modularizing Configurations In Mule?
Ans: The following approaches can be used when modularizing a configuration. Independent configurations: a Mule instance can load several independent configuration files side by side. Inherited configurations: the main idea is to express a formal parent-child dependency between two configurations; by strongly expressing this dependency, you have the guarantee at boot time that no configuration file has been omitted. This is done simply by using the same name for the parent and child models and by flagging the child as being an heir. Imported configurations: you can easily import external Spring application context files into your Mule configuration files. Heterogeneous configurations: it is possible to mix several styles of Mule configuration in an instance; for example, an instance can be configured with a Groovy script and Spring XML configuration builders.
Q. Give An Example Of A Studio Connector In Mule?
Ans:
Q. Give An Example Of An Http Connector In Mule?
Ans:
Q. When Does Mule Instantiate A Connector?
Ans: If Mule figures out that one of our endpoints needs a particular connector, it automatically instantiates one for us, using the default values for its different configuration parameters. This is a perfectly viable approach if we are satisfied with the behavior of the connector under its default configuration. This is often the case for the VM or HTTP transports. Note that Mule names these default connectors with monikers such as connector.http.0.
Q. What Is A Transport Service Descriptor In Mule?
Ans: The connector has a technical configuration known as the Transport Service Descriptor (TSD). This hidden configuration is automatically used for each instance of the connector. It defines technical parameters such as which classes to use for the message receivers, requesters, and dispatchers, or the default transformers to use in inbound, outbound, and response routers. Knowing these default values is essential to grasping the behavior of a transport.
Q. How Many Kinds Of Endpoints Are There In Mule?
Ans: There are two: inbound and outbound. You will use inbound and outbound endpoints to communicate between components and services inside Mule as well as with the outside world.
Q. What Is An Outbound Endpoint In Mule?
Ans: Outbound endpoints are used to send data. An outbound endpoint is used to do things such as send SOAP messages, write to file streams, and send email messages.
Q. What Is A Global Endpoint In Mule?
Ans: When an endpoint destination is shared by several routers, it is worth creating a global endpoint. A global endpoint is not typified for inbound or outbound routing, making it usable in many different places in a configuration file. It must be named so it can actually be used in a service, which references the global endpoint by its name. A global endpoint can also help clarify the usage of a particular destination.
Q. Why Does An Endpoint In Mule Offer An Address Attribute?
Ans: This allows us to configure a generic endpoint using the Mule 1.x style of URI-based destination addresses instead of the dedicated attributes of the specific endpoint element.
Q. Give An Example Of A File Endpoint In Mule?
Ans:
Q. What Is The Streaming Property In The File Connector In Mule?
Ans: The value of the streaming property can be either true or false. If it is set to true, we work on a stream of file data; otherwise we work with the file itself.
Q. What Is The PollingFrequency Property In The File Connector In Mule?
Ans: File inbound endpoints poll their source directories for new content. This is accomplished by setting pollingFrequency to a value in milliseconds.
Q. What Is The AutoDelete Property In The File Connector In Mule?
Ans: The default value of autoDelete is true. Therefore, a file inbound endpoint will, by default, remove the file from the source directory once it has been read. If you do not want the file deleted automatically, set autoDelete to false.
Q. What Is The FileAge Property In The File Connector In Mule?
Ans: The fileAge property specifies how long the endpoint should wait before reading the file again. For instance, a fileAge of 60000 indicates Mule should wait a minute before processing the file again.
Q. How To Send Only Certain Types Of Files From One Directory To Another In Mule?
Ans: Use a filename filter element on the file inbound endpoint; its pattern attribute indicates which pattern of file names should move from one directory to another.
Q. What Is The VM Transport In Mule?
Ans: The VM transport is a special kind of transport that you use to send messages via memory. These messages never leave the JVM the Mule instance is running in.
Q. What Is The Multicasting Router In Mule?
Ans: The multicasting router can send messages to multiple endpoints over different transports.
The multicasting router allows you to easily move the same messages across these different endpoints.
Q. What Does A Mule Transformer Receive?
Ans: An event, more specifically an instance of org.mule.api.MuleEvent. This object carries not only the actual content of the message but also the context of the event.
Q. What Is The Mule Context?
Ans: The Mule context is composed of references to different objects, including security credentials (if any) and the session in which the request is processed. All the internals of the ESB are accessible through the Mule context.
Q. What Is The Payload In Mule?
Ans: The content of a message is known as its payload. It is wrapped in an instance of org.mule.api.MuleMessage, which provides different means of accessing the payload under different forms. A MuleMessage also contains properties, much like the header of a SOAP envelope or the properties of a JMS message, and can also have multiple named attachments.
Q. What Are The Different Types Of Messages In Mule?
Ans: Bridge messages: pass messages from inbound to outbound routers. Echo and log messages: log messages and move them from inbound to outbound routers. Build messages: create messages from fixed or dynamic values.
Q. Do I Need An ESB?
Ans: Mule and other ESBs offer real value in scenarios where there are at least a few integration points, or at least three applications to integrate. They are also well suited to scenarios where loose coupling, scalability and robustness are required.
Q. Why Mule ESB?
Ans: Mule ESB is lightweight but highly scalable, allowing you to start small and connect more applications over time. Mule manages all the interactions between applications and components transparently, regardless of whether they exist in the same virtual machine or over the Internet, and regardless of the underlying transport protocol used. There are currently several commercial ESB implementations on the market.
However, many of these provide limited functionality or are built on top of an existing application server or messaging server, locking you into that specific vendor. Mule is vendor-neutral, so different vendor implementations can plug into it. You are never locked in to a specific vendor when you use Mule. Contact for more on Mule ESB Online Training
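The core concepts covered in this section, a message carrying a payload plus properties, chained transformers, a filter, and a multicasting router, can be sketched together in a few lines. This is plain Python, not the Mule API; the class and function names are illustrative only:

```python
class Message:
    """Stand-in for a MuleMessage: a payload plus meta-information properties."""
    def __init__(self, payload, properties=None):
        self.payload = payload
        self.properties = properties or {}

def uppercase_transformer(msg):
    return Message(msg.payload.upper(), msg.properties)

def greeting_transformer(msg):
    return Message("Hello, " + msg.payload, msg.properties)

def chain(msg, transformers):
    # Transformers can be chained to cumulate their effects.
    for t in transformers:
        msg = t(msg)
    return msg

def not_empty_filter(msg):
    # A filter gives a router the "brains" to decide what to do with a message.
    return bool(msg.payload)

def multicast(msg, endpoints, accept=not_empty_filter):
    # A multicasting router sends the same message to several endpoints.
    if not accept(msg):
        return []
    return [endpoint(msg) for endpoint in endpoints]

log = []
endpoints = [lambda m: log.append(("file", m.payload)),   # pretend file endpoint
             lambda m: log.append(("jms", m.payload))]    # pretend JMS endpoint

out = chain(Message("mule"), [uppercase_transformer, greeting_transformer])
multicast(out, endpoints)
print(log)  # [('file', 'Hello, MULE'), ('jms', 'Hello, MULE')]
```

In real Mule these roles are declared in the XML configuration and wired by the Spring-driven builder; the sketch only shows how the pieces relate.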
Office 365 Interview Questions
Q. When Will Existing Office 365 Users Gain Access To The Office 2013 Desktop Suite?
Ans: Office 365 ProPlus, the Office 365 version of the Office 2013 suite, is made available to existing Office 365 subscribers with an eligible subscription today using the Office Deployment Tool. It will also be available directly in Office 365 once your tenant has been upgraded.
Q. What About Pricing For The New Office 365 For Business Offerings?
Ans: Microsoft's Office site has comparison information for small business users and midsize/enterprise users.
Q. If I Install Office 365, Will It Overwrite Office 2010 Or Can I Have Both Versions Running?
Ans: Office 365 ProPlus can be installed side-by-side with previous versions of Office.
Q. How Would Office 365 Allow Us To Replace Dropbox?
Ans: Office 365 now offers SkyDrive Pro for personal document management and sharing. It is accessible across OSes and devices, and comes with the powerful backend of SharePoint Online for versioning, backup/restore, review management, and direct integration with the Office Web Apps.
Q. Is It Possible For Current P (Small Business) Plan Customers To Upgrade To The M (Midsize Business) Plan?
Ans: Existing customers on the P1 version of Office 365 will be able to upgrade to the Midsize Business SKU later this year. New Office 365 Small Business and Small Business Premium customers will also be able to upgrade to the Midsize Business SKU later this year.
Q. How Much Space Will Be Available In SkyDrive Pro Per User?
Ans: 7GB, up from 500MB in the previous Office 365 experience.
Q. Will Organizations Using Live@edu Be Able To Move Their SkyDrive Files Directly To Office 365 For Education, Now That SkyDrive Is Part Of The Offering?
Ans: Students and teachers who have been using SkyDrive with Live@edu will continue to have access to their documents and files stored on SkyDrive.
They can easily move files between their consumer SkyDrive and the SkyDrive Pro provided by their organization with Office 365. There is no way for an organization to move these files on behalf of the user.
Q. Will Existing Yammer Accounts Still Work?
Ans: Yes.
Q. Can I Integrate Yammer With SharePoint?
Ans: Yes. Yammer and SharePoint can already be connected using available web parts to pull in data from Yammer, and Open Graph technology to push information from third-party systems into the Yammer stream. (MJF: Here's Microsoft's timetable for fully integrating Yammer and SharePoint. On the pricing side, March 1 is an important milestone date.)
Q. To What Extent Are Different Mobile Devices Supported With The New Office 365?
Ans: You'll find more details on support for various types of devices here. (MJF: As my CNET colleague Jay Greene noted yesterday, Microsoft still isn't saying anything new/more about plans to make Office available on the iPad.)
Q. When Will Existing Office 365 Users Be Required To Move? Can We Move On Our Own Schedule?
Ans: The service upgrade for existing Office 365 customers is progressing now. Once you are notified of the upgrade, you can defer it one time for 45-60 days. For more information about the service upgrade refer to: http://community.office365.com/en-us/wikis/office_365_service_updates/office-365-service-upgrade-center-for-enterprise.aspx
Q. Is The Cost Of The Project Online Service On Top Of An Office 365 E(n) Plan?
Ans: Project Online is an additional service to the Office 365 E plans which delivers enterprise project, program and portfolio management. (MJF: The same is true of Visio Online.)
Q. Is It Possible To Mix And Match Different O365 Subscriptions, I.e., Having Some Users On An E1 Plan And Others On The E2/Midsize Business Plan?
Ans: You can mix plans within a plan family, so yes, you can mix Enterprise E1, E3, E4 and K1 plans. You can also mix Small Business P1 and P2 plans. M plans cannot be mixed with any others.
Q.
Are BlackBerry Cloud Services Available On The New Office 365?
Ans: Yes. They are available now for the new Office 365.
Q. Will There Still Be An E2 Plan?
Ans: We are including Office Web Apps in E1, which enables even more of our customers to make use of Office Web Apps while simplifying our lineup. E2 will continue to be available for existing customers. I've seen a bit of confusion about what constitutes an upgrade vs. an update in Office 365 land. The easiest way to think about this is that an upgrade is a whole new version of Office 365 (similar to what a new release would be in Exchange, SharePoint, Lync and/or Office). An update is a set of more minor fixes and updates, which the Office 365 team has said it plans to provide quarterly to those using the hosted services. Contact for more on Office 365 Online Training
Oracle Access Manager Interview Questions
Q. What Is Single Sign-On?
Ans: Single Sign-On allows users to sign on once to a protected application and gain access to other protected resources within the same domain defined with the same authentication level.
Q. What Is Multi Domain Single Sign-On?
Ans: Multi Domain SSO gives users the ability to access more than one protected resource (URLs and applications), scattered across multiple domains, with one-time authentication.
Q. What Is The Authentication Mechanism Used By Oracle Access Manager?
Ans: The ObSSOCookie, and it is stateless.
Q. Explain The Various Security Modes Present In Oracle Access Manager?
Ans: Open: allows unencrypted communication. In Open mode, there is no authentication or encryption between the AccessGate and Access Server. The AccessGate does not ask for proof of the Access Server's identity, and the Access Server accepts connections from all AccessGates. Similarly, the Identity Server does not require proof of identity from WebPass. Simple: supports encryption by Oracle. In Simple mode, communications between Web clients (WebPass and Identity Server, Policy Manager and WebPass, and Access Server and WebGate) are encrypted using TLS v1. In both Simple and Cert mode, Oracle Access Manager components use X.509 digital certificates only. This includes Cert Authentication between WebGates and the Access Server, where the standard cert-decode plug-in decodes the certificate and passes certificate information to the standard credential_mapping authentication plug-in. For each public key there exists a corresponding private key that Oracle Access Manager stores in the aaa_key.pem file for the Access Server (or ois_key.pem for the Identity Server). Cert: requires a third-party certificate. Use Cert (SSL) mode if you have an internal Certificate Authority (CA) for processing server certificates. In Cert mode, communication between WebGate and Access Server, and Identity Server and WebPass, is encrypted using Transport Layer Security, RFC 2246 (TLS v1).
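The Simple and Cert modes above both come down to TLS with X.509 certificate verification between components. As a generic illustration only (Python's standard ssl module, not the OAM configuration itself, which is done through OAM's own tools and PEM key files), a client-side TLS context that refuses unverified peers looks like this:

```python
import ssl

# A default client context verifies the server's X.509 certificate chain
# and hostname before any application data flows -- conceptually what a
# WebGate does when talking to the Access Server in Cert mode. This is
# an analogy, not OAM code.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # unverified peers are rejected
print(context.check_hostname)                    # hostname must match the cert
```

An Open-mode-style channel would correspond to skipping this verification entirely, which is why Open mode offers no protection against impersonation.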
Q. Explain The Architecture Of Oracle Access Manager?
Ans: The Oracle Access Manager architecture mainly consists of components such as the Identity Server, WebPass, Policy Manager, Access Server and WebGate. The Identity Server is a standalone C++ server which communicates directly with LDAP. It also receives requests from and sends responses to WebPass. WebPass is a web server plugin that passes information between the Identity Server and the web server. It redirects HTTP requests from the browser to the Access Server, and sends IdentityXML SOAP requests to the Identity Server. The Policy Manager (PMP or PAP) is a web server plugin that communicates directly with the user, configuration and policy repositories. The Access Server is a standalone C++ server, also called the PDP. It receives requests from and sends responses to WebGates/AccessGates. It also communicates with LDAP, and it answers Access Server SDK requests. WebGate (the PEP) is a web server plugin that passes information between the web server and the Access Server. It passes user authentication data to the Access Server for processing.
Q. What Are The ObSSOCookie Contents?
Ans: The cookie contains an encrypted session token and non-encrypted data. The encrypted session token consists of: the DN of the authenticated user, the level of the authentication scheme, the IP address of the client to which the cookie was issued, the time the cookie was issued, and the time the cookie was last updated. If the user is not idle, the cookie is automatically updated at a fixed interval to prevent session timeout. The update interval is one quarter of the AccessGate's idle session timeout. The unencrypted ObSSOCookie data contains the cookie expiry time, the domain in which the cookie is valid, and an additional flag that determines whether the cookie can only be sent using SSL.
Q. What Is The Key Used For Encrypting The ObSSOCookie?
Ans: The shared secret key. It is configured in the Identity Admin console and can be generated by the OAM administrator.
Q. What Happens If The ObSSOCookie Is Tampered With?
Ans: When the access system generates the ObSSOCookie, an MD5 hash is taken of the session token. When the user is authenticated again using the cookie, the MD5 hash is compared with the original cookie contents. MD5 is a one-way hash, so it cannot be decrypted. The Access Server compares the cookie contents with the hash; if the two are not the same, the cookie was tampered with in the interim. The cookie does not contain the username or password.
Q. What Is The Difference Between WebGate And AccessGate?
Ans: WebGate is an out-of-the-box plug-in that intercepts Web resource (HTTP) requests and forwards them to the Access Server for authentication and authorization. An AccessGate is a custom WebGate that can intercept requests for both HTTP and non-HTTP resources.
Q. What Are The Major Parameters Defined In An Authentication Scheme?
Ans: The authentication scheme level, which defines the level of security for an application.
Q. Explain The Flow When A User Requests An Application Protected By Oracle Access Manager?
Ans: The following steps describe the flow when a user requests access to a resource protected by Oracle Access Manager: The user requests a resource through a web browser. The WebGate intercepts the request and checks with the Access Server whether the resource is protected. If the resource is not protected, the user is shown the requested resource. If the resource is protected, the Access Server checks with the Policy Manager for the authentication scheme configured for that resource. The user is prompted to enter their credentials as per the authentication scheme defined for the resource. The WebGate sends the credentials to the Access Server to check them against the backend (LDAP server). Upon successful authentication, the Access Server checks whether the user is authorized to access the resource. If the user is authorized, the Access Server creates the session id and passes it to the WebGate.
An ObSSOCookie is created and sent to the user's browser, and the user is shown the requested resource. If the user is not authorized, an error page (if one is defined in the policy domain) is shown to the user.
Q. Explain The Flow Of Multi Domain Single Sign-On?
Ans: Multi Domain SSO gives users the ability to access more than one protected resource (URLs and applications), scattered across multiple domains, with one-time authentication. For multi domain SSO to work, the Access Servers in all domains must use the same policy directory. Multi domain SSO works only with WebGates, not AccessGates. Within each individual domain, each WebGate must have the same "primary HTTP cookie domain". In a multi domain SSO environment, we designate one web server (where a WebGate is installed) as the "Primary Authentication Server". The Primary Authentication Server acts as a central server for all authentications in the multi domain environment. In general, the WebGate installed in the domain where the Access Server resides is designated as the primary authentication server. Let's assume that the OAM components are installed on host1.domain1.com and we designate host1.domain1.com as the primary authentication server. Host2.domain2.com has a WebGate (say, webgate2) installed. A resource, abc.html, is protected with form-based authentication on host1.domain1.com. A resource, xyz.html, is protected with Basic over LDAP authentication on host2.domain2.com. The following steps explain how multi domain SSO works: The user initiates a request for a Web page from a browser; for instance, the request could be for host2.domain2.com/xyz.html. Webgate2 (on host2.domain2.com) sends the authentication request back through the user's browser in search of the primary authentication server. In this example you have designated host1.domain1.com as the primary authentication server.
The request for authentication is sent from the user's browser to the primary authentication server, host1.domain1.com. This request flows to the Access Server. The user logs in with the corresponding authentication scheme, and the ObSSOCookie is set for host1.domain1.com. The Access Server also generates a session token with a URL that contains the ObSSOCookie. The session token and ObSSOCookie are returned to the user's browser. The session token and ObSSOCookie are sent to host2.domain2.com. The WebGate (webgate2) on host2.domain2.com sets the ObSSOCookie for its own domain (.domain2.com) and satisfies the user's original request for the resource host2.domain2.com/xyz.html. The user gets the resource. In the same browser, if the user accesses a host1.domain1.com page, the resource is presented without asking for credentials, as the ObSSOCookie is already available for .domain1.com (see step 3).
Q. What Is The Access Server SDK?
Ans: The Access Manager Software Developer's Kit (SDK) enables you to enhance the access management capabilities of the Access System. This SDK enables you to create a specialized AccessGate. The Access Manager SDK creates an environment for you to build a dynamic link library or a shared object to perform as an AccessGate. You also need the configureAccessGate.exe tool to verify that your client works correctly.
Q. What Is IdentityXML?
Ans: IdentityXML provides a programmatic interface for carrying out the actions that a user can perform when accessing a COREid application from a browser. For instance, a program can send an IdentityXML request to find members of a group defined in the Group Manager application, or to add a user to the User Manager. IdentityXML enables you to process simple actions and multi-step workflows to change user, group, and organization object profiles. After creating the IdentityXML request, you construct a SOAP wrapper to send the IdentityXML request to WebPass using HTTP. The IdentityXML API uses XML over SOAP.
We pass IdentityXML parameters to the COREid Server using an HTTP request. This HTTP request contains a SOAP envelope. When WebPass receives the HTTP request, the SOAP envelope indicates that it is an IdentityXML request rather than a usual browser request. The request is forwarded to the COREid Server, where it is carried out and a response is returned. Alternatively, you can use WSDL to construct the SOAP request. The SOAP content consists of a SOAP envelope (with the oblix namespace defined), a SOAP body (with authentication details), and the actual request (with the application name and parameters). The application name can be userservcenter, groupservcenter, or objservcenter (for organizations).

Q.What Is An Sspi Connector And Its Role In Oracle Access Manager Integrations?
Ans: The Security Provider for WebLogic SSPI (Security Provider) ensures that only appropriate users and groups can access Oracle Access Manager-protected WebLogic resources to perform specific operations. The Security Provider also enables you to configure single sign-on between Oracle Access Manager and WebLogic resources. The WebLogic security framework provides Security Service Provider Interfaces (SSPIs) to protect J2EE applications. The Security Provider takes advantage of these SSPIs, enabling you to use Oracle Access Manager to protect WebLogic resources via user authentication, user authorization, and role mapping. The Security Provider consists of several individual providers, each of which enables a specific Oracle Access Manager function for WebLogic users: Authenticator: This security provider uses Oracle Access Manager authentication services to authenticate users who access WebLogic applications. Users are authenticated based on their credentials, such as user name and password. The security provider also offers user and group management functions: it enables the creation and deletion of users and groups from the BEA WebLogic Server.
It also provides single sign-on between WebGates and portals. Identity Asserter: Like the Authenticator, this security provider uses Oracle Access Manager authentication services to validate already-authenticated Oracle Access Manager users using the ObSSOCookie and to create a WebLogic-authenticated session. Authorizer: This security provider uses Oracle Access Manager authorization services to authorize users who are accessing a protected resource. The authorization is based on Oracle Access Manager policies. Role Mapper: This security provider returns security roles for a user. These roles are defined in Oracle Access Manager, and they are provided by Oracle Access Manager using return actions on a special authentication policy. This authentication policy contains a resource with a URL prefix of /Authen/Roles. Role Mapper maps these roles to predefined security roles in WebLogic.

Q.Explain The Integration And Architecture Of The Oam-oaam Integration?
Ans: Using these products in combination gives you fine control over the authentication process and the full capabilities of pre- and post-authentication checking against Adaptive Risk Manager models. The ASA-OAM integration involves two Oracle Access Manager AccessGates: a traditional WebGate fronting the web server for the Adaptive Strong Authenticator (ASA) application, and an embedded AccessGate. The Access Server SDK must be installed and the configureAccessGate tool run; the ASA bharosa files must be updated with the ASDK location; and an application must be protected with the ASA authentication scheme and tested to confirm it reaches the ASA landing page for login. Here is how the flow goes: The user requests a resource. The WebGate acting as the front end for the ASA application intercepts the request and redirects it to the ASA application. The user enters credentials, and the Access SDK setup in the ASA application contacts the AccessGate, which in turn contacts the Access Server to validate the credentials.
Upon successful authentication, the Access Server generates an ObSSOCookie and forwards it to the browser. The user is then shown the requested resource.

Q.Explain The Iwa Mechanism In Oracle Access Manager?
Ans: OAM has a feature that enables Microsoft Internet Explorer users to automatically authenticate to their web applications using their desktop credentials. This is known as Windows Native Authentication. The user logs in to the desktop machine, and local authentication is completed using the Windows Domain Administrator authentication scheme. The user opens an Internet Explorer (IE) browser and requests an Access System-protected web resource. The browser notes the local authentication and sends a token to the IIS web server. The IIS web server uses the token to authenticate the user and set up the REMOTE_USER HTTP header variable, which specifies the user name supplied by the client and authenticated by the server. The WebGate installed on the IIS web server uses the hidden feature of external authentication to get the REMOTE_USER header variable value and map it to a DN for ObSSOCookie generation and authorization. The WebGate creates an ObSSOCookie and sends it back to the browser. Access System authorization and other processes then proceed as usual. The maximum session timeout period configured for the WebGate applies to the generated ObSSOCookie.

Q.Explain The Major Params Defined In A Webgate Instance Profile?
Ans: Hostname: the name of the machine hosting the AccessGate. Maximum User Session Time: the maximum amount of time in seconds that a user's authentication session is valid, regardless of activity; at the expiration of this session time, the user is re-challenged for authentication (a forced logout). Default = 3600; a value of 0 disables this timeout. Idle Session Time (seconds): the amount of time in seconds that a user's authentication session remains valid without accessing any AccessGate-protected resources.
Maximum Connections: the maximum number of connections this AccessGate can establish. This parameter is based on how many connections are defined to each individual Access Server, and it may be greater than the number allocated at any given time. IPValidationException: specific to WebGates; a list of IP addresses that are excluded from IP address validation, often used for IP addresses that are set by proxies. Maximum Client Session Time: how long a connection to the Access Server is maintained by the AccessGate; if you are deploying a firewall (or another device) between the AccessGate and the Access Server, this value should be smaller than the firewall's timeout setting. Failover Threshold: the number representing the point at which this AccessGate opens connections to secondary Access Servers; if you type 30 in this field and the number of connections to primary Access Servers falls to 29, this AccessGate opens connections to secondary Access Servers. Preferred HTTP Host: defines how the host name appears in all HTTP requests as they attempt to access the protected web server; the host name in the HTTP request is translated into the value entered in this field, regardless of how it was written in the user's HTTP request. Primary HTTP Cookie Domain: describes the web server domain on which the AccessGate is deployed, for instance .mycompany.com. IPValidation: specific to WebGates; determines whether a client's IP address is the same as the IP address stored in the ObSSOCookie generated for single sign-on.

Q.What Is The Policy Manager Api?
Ans: The Policy Manager API provides an interface that enables custom applications to access the authentication, authorization, and auditing services of the Access Server, and to create and modify Access System policy domains and their contents.

Q.When Do You Need An Access Gate?
Ans: An AccessGate is required instead of a standard WebGate when you need to control access to a resource for which OAM does not provide an out-of-the-box (OOTB) solution. Examples include protection for non-HTTP resources (EJB, JNDI, etc.) and implementation of SSO across a combination of HTTP and non-HTTP resources. A file called obAccessClient.xml is stored on the server where the AccessGate is installed; this file contains the configuration parameters entered through the configureAccessGate tool.

Q.Explain The Flow When A User Makes A Request Protected By An Access Gate (not A Webgate)?
Ans: The flow is as follows: The application or servlet containing the AccessGate code receives a resource request from the user. The AccessGate code constructs an ObResourceRequest structure, and the AccessGate contacts the Access Server to find out whether the resource is protected. The Access Server responds. If the resource is not protected, the AccessGate allows the user to access it. Otherwise, the AccessGate constructs an ObAuthenticationScheme structure to ask the Access Server what credentials the user needs to supply, and the Access Server responds. The application uses a form or some other means to fetch the credentials. The AccessGate constructs an ObUserSession structure, which presents the user details to the Access Server. If the credentials prove valid, the AccessGate creates a session token for the user and then sends an authorization request to the Access Server. The Access Server validates whether the user is authorized to access that resource, and the AccessGate then allows the user to access the requested resource.

Q.Explain How Form Login Works If The Form Login Page Is In A Different Domain From Oam?
Ans: The mechanism here is the same as in multi domain SSO. Importantly, all of the activities for form authentication are carried out between the browser and one web server. Now, suppose you want to access a resource http://www.B.com/pageB.html but still be authenticated by the login form on www.A.com.
The authentication scheme required by pageB needs to have a redirect URL set to http://www.A.com. The WebGate at www.B.com redirects you to the NetPoint URL obrareq.cgi on www.A.com, with a query string that contains the original request (wu and wh). The WebGate on www.A.com determines that you need to do a form login for that resource, so it sets the ObFormLoginCookie with the wu and wh values from the query string, but sets the ru field to /obrareq.cgi. The WebGate on A then redirects your browser to the login form on A. When you post your credentials back to A, the ObFormLoginCookie is sent back as well. The WebGate on A authenticates your user id and password, sets the ObSSOCookie for the .A.com domain, and redirects you back to the ru value from the ObFormLoginCookie, which is /obrareq.cgi. This time when your browser requests http://www.A.com/obrareq.cgi, it passes the ObSSOCookie. The WebGate then redirects your browser back to the B web server, http://www.B.com/obrar.cgi, with the cookie value and the original URL in the query string. The WebGate on www.B.com extracts the cookie value, sets the ObSSOCookie for domain .B.com, and finally redirects you to the http://www.B.com/pageB.html page that you originally requested. Contact for more on OAM Online Training
Oracle ADF Interview Questions
Q.What is Oracle ADF?
Ans: Oracle ADF is a commercial Java/J2EE framework used to build enterprise applications. It is one of the most comprehensive and advanced frameworks in the market for J2EE.

Q.What are the advantages of using ADF?
Ans: It supports rapid application development; it is based on the MVC architecture; it takes a declarative (XML-driven) approach; it is secure; it reduces maintenance cost and time; and it is SOA enabled.

Q.What are the various components in ADF?
Ans: Oracle ADF has the following components: ADF Business Components (Model), ADF Faces (View), and ADF Task Flows (Controller).

Q.What is the return type of service methods?
Ans: Service methods can return scalar or primitive data types.

Q.Can service methods have a void return type?
Ans: Yes, service methods can have a void return type.

Q.Can service methods return complex data types?
Ans: No, service methods can return only primitive/scalar data types.

Q.Which component in ADF BC manages transactions?
Ans: The Application Module manages transactions.

Q.Can an entity object be based on two database objects (tables/views) or two web services?
Ans: No; entity objects always have a one-to-one relationship with a database object or web service.

Q.Where do we write business rules/validations in ADF, and why?
Ans: We should write validations at the Entity Object level, because they provide the highest degree of reuse.

Q.What is a Managed Bean?
Ans: A managed bean is a Java class that is initialized by the JSF framework. It is primarily used to hold view and controller logic, and to execute Java code on a user action such as a button click.

Q.What are Backing Beans?
Ans: Backing beans are managed beans that have a 1:1 mapping with a page. They have getters and setters for all the components in the related page.

Q.What is the difference between managed and backing beans?
Ans: A backing bean has a 1:1 relationship with a page, whereas a managed bean can be used in multiple pages.
A backing bean's scope is limited to the page, whereas managed beans can have other scopes too.

Q.What is a Task Flow?
Ans: A task flow is the controller of an ADF application; it provides a declarative approach to defining control flow. It is used to define the navigation between pages and the various task flow activities.

Q.What are the different types/categories of task flows?
Ans: Task flows are of two categories: bounded and unbounded.

Q.What is the difference between bounded and unbounded task flows?
Ans: Bounded task flows can be secured, but unbounded ones cannot. Bounded task flows can accept parameters and return values, but unbounded task flows do not support parameters. A bounded task flow has a single entry point (a default activity), whereas unbounded task flows have multiple entry points. Bounded task flows can be called from other bounded or unbounded task flows, but unbounded task flows cannot be called or reused. Bounded task flows support transactions; unbounded ones do not.

Q.What are the various access scopes supported by ADF?
Ans: ADF Faces supports the following scopes: application scope, session scope, pageFlow scope, request scope, and backing bean scope.

Q.Describe the life cycle of an ADF page?
Ans: An ADF page is an extension of JSF and has the following phases in its life cycle. Initialize Context: the ADF page initializes the LifecycleContext with information that will be used during the lifecycle. Prepare Model: the UI model is prepared and initialized; page parameters are set and the methods in the executable section of the ADF page's page definition are executed. Apply Input Values: this phase handles the request parameters; the values from the HTML are sent to the server and applied to the page bindings in the page definition.
Validate Input Values: this phase validates the values that were gathered in the Apply Input Values phase. Update Model: validated values supplied by the user are sent to the ADF Business Components data model. Validate Model Updates: the business components validate the user-supplied values. Invoke Application: this phase processes the UI event stack built during the page life cycle and also fires navigational events. Prepare Render: the final phase, where HTML is generated from the view tree.

Q.What is PPR and how do you enable Partial Page Rendering (PPR)?
Ans: PPR is a feature supported by ADF Faces with which we can re-render a small portion of an HTML page without refreshing the complete page. It is enabled by setting the autoSubmit property to true on the triggering element, and setting the partialTriggers property of the target component to refer to the component id of the triggering element.

Q.What is an Action Listener?
Ans: An action listener is a class that wants to be notified when a command component fires an action event. It contains an action listener method that processes the action event object passed to it by the command component.

Q.What are Business Components in ADF? Describe them.
Ans: All of these features can be summarized by saying that using ADF Business Components for your J2EE business service layer makes your life a lot easier. The key ADF Business Components that cooperate to provide the business service implementation are: ■ Entity Object: An entity object represents a row in a database table and simplifies modifying its data by handling all DML operations for you. It can encapsulate business logic for the row to ensure your business rules are consistently enforced. You associate an entity object with others to reflect relationships in the underlying database schema, creating a layer of business domain objects to reuse in multiple applications.
■ Application Module: An application module is the transactional component that UI clients use to work with application data. It defines an updatable data model and top-level procedures and functions (called service methods) related to a logical unit of work tied to an end-user task. ■ View Object: A view object represents a SQL query and simplifies working with its results. You use the full power of the familiar SQL language to join, project, filter, sort, and aggregate data into exactly the "shape" required by the end-user task at hand. This includes the ability to link a view object with others to create master/detail hierarchies of any complexity. When end users modify data in the user interface, your view objects collaborate with entity objects to consistently validate and save the changes.

Q.What is TopLink?
Ans: TopLink is an object-relational mapping layer that provides a map between the Java objects that the model uses and the database that is the source of their data. By default, a session named default is created.

Q.What is a Managed Bean?
Ans: JavaBean objects managed by a JSF implementation are called managed beans. A managed bean describes how a bean is created and managed; it has nothing to do with the bean's functionality. As you know, JSF uses the lazy initialization model: a bean in a particular scope is created and initialized not at the moment the scope starts, but on demand, i.e. when the bean is first required.

Q.What is a Backing Bean?
Ans: Backing beans are JavaBeans components associated with the UI components used in a page. Backing-bean management separates the definition of UI component objects from the objects that perform application-specific processing and hold data. A backing bean is about the role a particular managed bean plays.
This role is to be a server-side representation of the components located on the page. Usually backing beans have request scope, but that is not a restriction. The backing bean defines the properties and handling logic associated with the UI components used on the page. Each backing-bean property is bound to either a component instance or its value. A backing bean also defines a set of methods that perform functions for the component, such as validating the component's data, handling the events that the component fires, and performing processing associated with navigation when the component activates.

Q.What is a view object?
Ans: A view object is a model object used specifically in the presentation tier. It contains the data that must be displayed in the view layer and the logic to validate user input, handle events, and interact with the business-logic tier. The backing bean is the view object in a JSF-based application; backing bean and view object are interchangeable terms.

Q.Difference between Backing Bean and Managed Bean?
Ans: A backing bean is any bean that is referenced by a form, whereas a managed bean is a backing bean that has been registered with JSF (in faces-config.xml) and is automatically created (and optionally initialized) by JSF when it is needed. The advantage of managed beans is that the JSF framework automatically creates these beans and optionally initializes them with parameters you specify in faces-config.xml. Backing beans should be defined only in the request scope, while the managed beans created by JSF can be stored in the request, session, or application scope.

Q.What do you mean by Bean Scope?
Ans: A bean scope typically holds beans and other objects that need to be available in the different components of a web application.

Q.What are the different kinds of Bean Scopes in JSF?
Ans: JSF supports three bean scopes. Request Scope: The request scope is short-lived.
It starts when an HTTP request is submitted and ends when the response is sent back to the client. Session Scope: The session scope persists from the time a session is established until session termination. Application Scope: The application scope persists for the entire duration of the web application and is shared among all requests and sessions.

Q.What is the difference between JSP-EL and JSF-EL?
Ans: In JSP-EL, value expressions are delimited by ${…}; in JSF-EL, they are delimited by #{…}. The ${…} delimiter denotes immediate evaluation of the expression, at the time the application server processes the page. The #{…} delimiter denotes deferred evaluation: the application server retains the expression and evaluates it whenever a value is needed.

Q.How do you declare page navigation (navigation rules) in the faces-config.xml file in ADF 10g?
Ans: Navigation rules tell the JSF implementation which page to send back to the browser after a form has been submitted. We can declare the page navigation as follows:

<navigation-rule>
  <from-view-id>/index.jsp</from-view-id>
  <navigation-case>
    <from-outcome>login</from-outcome>
    <to-view-id>/welcome.jsp</to-view-id>
  </navigation-case>
</navigation-rule>

This declaration states that the login action navigates to /welcome.jsp if it occurred inside /index.jsp.

Q.What are the JSF life-cycle phases?
Ans: The six phases of the JSF application life cycle are as follows (note the event processing at each phase): 1. Restore view 2. Apply request values; process events 3. Process validations; process events 4.
Update model values; process events 5. Invoke application; process events 6. Render response

Q.Explain briefly the life-cycle phases of JSF?
Ans: 1. Restore view: A request comes through the FacesServlet controller. The controller examines the request and extracts the view ID, which is determined by the name of the JSP page. 2. Apply request values: The purpose of this phase is for each component to retrieve its current state. The components must first be retrieved or created from the FacesContext object, followed by their values. 3. Process validations: Each component has its values validated against the application's validation rules. 4. Update model values: JSF updates the actual values of the server-side model by updating the properties of your backing beans. 5. Invoke application: The JSF controller invokes the application to handle form submissions. 6. Render response: JSF displays the view with all of its components in their current state.

Q.What is setActionListener?
Ans: The setActionListener tag is a declarative way to allow an action source (af:commandButton, af:commandLink, etc.) to set a value before navigation. It is perhaps most useful in conjunction with the "processScope" EL scope provided by ADF Faces, as it makes it possible to pass details from one page to another without writing any Java code. This tag can be used both with ADF Faces commands and with JSF standard tags. For example, suppose we have an "employee" table and we want to fetch the salary of the employee in a particular row and send that salary to the next page in process scope, request scope, etc. We can do this with setActionListener.
It has two attributes: from, the source of the value, which can be an EL expression or a constant value; and to, the target for the value, which must be an EL expression. For instance (assuming the table's row variable is row):

<af:setActionListener from="#{row.salary}"
                      to="#{processScope.salary1}"/>

This setActionListener picks up the salary value of that row and stores it in the salary1 variable, so any page can read it as processScope.salary1. It is very simple to use, and very useful. Contact for more on Oracle ADF Online Training
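To make the faces-config.xml registration mentioned in the managed-bean discussion above concrete, a bean like the one backing this example could be declared roughly as follows (the bean name, class, and scope here are hypothetical, for illustration only):

```xml
<!-- faces-config.xml: registers a managed bean that JSF creates on demand -->
<managed-bean>
  <managed-bean-name>employeeBean</managed-bean-name>
  <managed-bean-class>com.example.view.EmployeeBean</managed-bean-class>
  <managed-bean-scope>request</managed-bean-scope>
</managed-bean>
```

With this entry in place, the first EL reference to #{employeeBean} causes JSF to instantiate the class and place it in the declared scope, which is the lazy initialization model described earlier.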
Oracle BPM Interview Questions
Q.What Is A Level 0, Level 1 Backup?
Ans: A level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data, backing the data file up into a backup set just as a full backup would. A level 1 incremental backup can be either of the following types: a differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0; or a cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0.

Q.How Do You Set Up RMAN Tape Backups?
Ans: Configure a channel of type SBT_TAPE and use the "ENV" parameter to set the tape (media manager) configuration.

Q.Which Init Parameter Specifies The Minimum Number Of Days That Oracle Keeps Backup Information In The Control File?
Ans: You can use the CONTROL_FILE_RECORD_KEEP_TIME parameter to specify the minimum number of days that Oracle keeps this information in the control file.

Q.What Is The Difference Between Validate And Crosscheck?
Ans: The restore ... validate and validate backupset commands test whether you can restore backups or copies. Use restore ... validate when you want RMAN to choose which backups or copies should be tested, and validate backupset when you want to specify which backup sets should be tested.

Q.How Do I Go About Backing Up My Online Redo Logs?
Ans: Online redo logs should never, ever be included in a backup, regardless of whether that backup is performed hot or cold. The reasons are two-fold: first, you physically cannot back up a hot online redo log; and second, there is precisely zero need to do so, because an archived redo log is, by definition, a backup copy of a formerly online log. There is also a more practical reason: backing up the online logs yourself increases the risk of loss.

Q.What Is A Backup Set?
Ans: RMAN can store backup data in a logical structure called a backup set, which is the smallest unit of an RMAN backup.
A backup set contains the data from one or more data files, archived redo logs, control files, or a server parameter file.

Q.What Is A Channel? How Do You Enable Parallel Backups With RMAN?
Ans: Use the ALLOCATE CHANNEL command to manually allocate a channel, which is a connection between RMAN and a database instance. To enable parallel backups, allocate multiple manual channels in the run block, or configure parallelism: CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;

Q.What Is An Auxiliary Channel In RMAN? When Do You Need It?
Ans: An auxiliary channel is a link to an auxiliary instance. If you do not have automatic channels configured, then before issuing the DUPLICATE command you must manually allocate at least one auxiliary channel within the same RUN command. An auxiliary database is used when a duplicate database is created or when tablespace point-in-time recovery is performed; it can be on the same host or a different host.

Q.Is It Possible To Duplicate Specific Tables When Using The RMAN Duplicate Feature? If Yes, How?
Ans: No, table-based recovery is not possible with the RMAN DUPLICATE command.

Q.Outline The Steps Involved In Cancel-based Recovery Of The Full Database From Hot Backup?
Ans: RMAN doesn't support cancel-based recovery the way SQL*Plus does.

Q.Outline The Steps Involved In SCN-based Recovery Of The Full Database From Hot Backup?
Ans: startup mount; restore database UNTIL SCN 233545; recover database UNTIL SCN 233545; alter database open resetlogs;

Q.How Do You Verify The Integrity Of An Image Copy In An RMAN Environment?
Ans: Use the commands below: RMAN> catalog datafilecopy 'f:testsystem.dbf'; RMAN> backup validate check logical datafile 'f:testsystem.dbf'; SQL> SELECT * FROM v$database_block_corruption;

Q.Is It Possible To Take A Catalog Database Backup Using RMAN? If Yes, How?
Ans: The recovery catalog is a schema stored in a database that tracks backups and restores of target databases.
So it is better to take an export backup of the catalog schema.

Q.How Do You Identify Expired, Active, And Obsolete Backups? Which RMAN Commands Do You Use?
Ans: Obsolete backups: RMAN> report obsolete; Expired backups: RMAN> list expired backup; Active backups: RMAN> list backup;

Q.Outline The Steps Involved In Time-based Recovery Of The Full Database From Hot Backup?
Ans: startup mount; restore database UNTIL TIME "TO_DATE('28/12/2012 18:00:00', 'DD/MM/YYYY HH24:MI:SS')"; recover database UNTIL TIME "TO_DATE('28/12/2012 18:00:00', 'DD/MM/YYYY HH24:MI:SS')"; alter database open resetlogs;

Q.Explain The Steps To Perform Point-in-time Recovery With A Backup Taken Before A Resetlogs Of The Db?
Ans: We need to list the database incarnations using the list incarnation command, shut down the database, start it in mount mode, issue reset database to incarnation to reset the incarnation, restore the database using the restore command (e.g. restore until scn 23243), recover the database, and open it with resetlogs. For example: RMAN> list incarnation of database; RMAN> reset database to incarnation 5; run { set until scn 234345; restore database; rec ….

Q.Outline The Steps For Changing The DBID In A Cloned Environment?
Ans: shutdown immediate; startup mount; then run the DBNEWID utility from the command line: nid target=/ and finally SQL> alter database open resetlogs;

Q.How Do You Install The RMAN Recovery Catalog? Or, List The Steps Required To Enable RMAN Backup For A Target Database?
Ans: Steps to be followed: create a connection string at the catalog database; at the catalog database, create a new user (or use an existing user) and grant that user the recovery_catalog_owner privilege; then log in to RMAN with the connection string: export ORACLE_SID; rman target catalog @connection string; RMAN> ….

Q.List Some Of The RMAN Catalog View Names That Contain Catalog Information?
Ans: RC_DATABASE_INCARNATION, RC_BACKUP_COPY_DETAILS, RC_BACKUP_CORRUPTION, and RC_BACKUP_DATAFILE_SUMMARY, to name a few.

Q.What Is The Difference Between Obsolete RMAN Backups And Expired RMAN Backups?
Ans: The term obsolete does not mean the same as expired. In short, obsolete means "not needed," whereas expired means "not found." A status of "expired" means that the backup piece or backup set is not found in the backup destination. A status of "obsolete" means the backup piece is still available but no longer needed, because RMAN has been configured to no longer need it after so many days have elapsed or so many backups have been performed.

Q.When Do You Use The Crosscheck Command?
Ans: Crosscheck is useful to check whether the catalog information is consistent with the OS-level information.

Q.If Some Blocks Are Corrupted Due To A System Crash, How Will You Recover?
Ans: Using the RMAN BLOCKRECOVER command.

Q.You Have Taken A Manual Backup Of A Datafile Using The O/S. How Will RMAN Know About It? Or, How Do You Register A Manual/User-managed Backup In RMAN (Recovery Catalog)?
Ans: By using the catalog command. You have to catalog that manual backup in RMAN's repository, for example: RMAN> catalog datafilecopy '/DB01/BACKUP/users01.dbf'; or RMAN> CATALOG START WITH '/tmp/backup.ctl'; Restrictions: the file must be accessible on disk and must be a complete image copy of a single file.

Q.Where Does RMAN Keep Information About Backups If You Are Using RMAN Without A Catalog?
Ans: RMAN keeps information about backups in the control file.

Q.What Is The Difference Between Catalog And Nocatalog?
Ans: The difference is only in who maintains the backup records (when the last successful backup was taken, incremental, differential, etc.). In CATALOG mode another database (the recovery catalog database) stores all the information; in NOCATALOG mode the control file of the target database is responsible.

Q.How Do You See Information About Backups In RMAN?
Ans: RMAN> list backup;

Q.How Do You Monitor RMAN Backup Job Status?
Ans: Use this SQL to check:
SQL> SELECT sid, totalwork, sofar FROM v$session_longops WHERE sid = 153;
Supply the SID of the session running the backup; the view shows its progress.

Q.How Does Rman Improve Backup Time?
Ans: RMAN backup time is much lower than that of a regular online backup because, for incremental backups, RMAN copies only modified blocks.

Q.What Is The Difference Between Cumulative Incremental And Differential Incremental Backups?
Ans:
Differential backup: the default type of incremental backup, which backs up all blocks changed after the most recent backup at level n or lower.
Cumulative backup: backs up all blocks changed after the most recent backup at level n-1 or lower.

Q.How Do You Enable The Autobackup For The Controlfile Using Rman?
Ans: Issue this command at the RMAN prompt:
RMAN> configure controlfile autobackup on;
We can also configure the controlfile backup format:
RMAN> configure controlfile autobackup format for device type disk to '$HOME/BACKUP/RMAN/%F.bkp';
— $HOME/BACKUP/RMAN/ can be any desired location.
Contact for more On Oracle BPM Online Training
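The two incremental types can be requested explicitly; a minimal sketch of the standard RMAN syntax:

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;              -- base backup all later incrementals build on
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;              -- differential: changes since the last level 1 or 0
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   -- cumulative: changes since the last level 0
```

Cumulative backups are larger but make restore simpler, since only the level 0 plus the latest cumulative level 1 need to be applied.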
Oracle Data Guard Interview Questions
Q1. What is Data Guard?
Ans: Oracle Data Guard is a disaster recovery solution from Oracle Corporation that is used extensively in the industry to handle primary site failure, failover, and switchover scenarios.

Q2. What are the uses of Oracle Data Guard?
Ans:
a) Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data.
b) Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions.
c) With Data Guard, administrators can optionally improve production database performance by offloading resource-intensive backup and reporting operations to standby systems.

Q3. What is Redo Transport Services?
Ans: It controls the automated transfer of redo data from the production database to one or more archival destinations. Redo transport services perform the following tasks:
a) Transmit redo data from the primary system to the standby systems in the configuration.
b) Manage the process of resolving any gaps in the archived redo log files due to a network failure.
c) Automatically detect missing or corrupted archived redo log files on a standby system and automatically retrieve replacement archived redo log files from the primary database or another standby database.

Q4. What is Apply Services?
Ans: Apply services apply redo data on the standby database to maintain transactional synchronization with the primary database. Redo data can be applied either from archived redo log files, or, if real-time apply is enabled, directly from the standby redo log files as they are being filled, without requiring the redo data to be archived first at the standby database. Apply services also allow read-only access to the data.

Q5. What is the difference between physical and logical standby databases?
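Redo transport is normally configured on the primary through an archive destination parameter. A hedged sketch, where stby is a hypothetical TNS alias and DB_UNIQUE_NAME for the standby:

```sql
-- On the primary: send redo asynchronously to the standby service 'stby'
-- (hypothetical name; match it to your tnsnames.ora and DB_UNIQUE_NAME).
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby';

-- Enable the destination so transport starts.
ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;
```

ASYNC corresponds to maximum performance mode; a SYNC destination would be used for the higher protection modes discussed below.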
Ans: The main difference between physical and logical standby databases is the manner in which apply services apply the archived redo data:
a) For physical standby databases, Data Guard uses Redo Apply technology, which applies redo data on the standby database using standard recovery techniques of an Oracle database.
b) For logical standby databases, Data Guard uses SQL Apply technology, which first transforms the received redo data into SQL statements and then executes the generated SQL statements on the logical standby database.

Q6. What is Data Guard Broker?
Ans: The Data Guard broker manages primary and standby databases through its interfaces, which include a command-line interface (DGMGRL) and a graphical user interface integrated into Oracle Enterprise Manager. It can be used to:
a) Create and enable Data Guard configurations, including setting up redo transport services and apply services.
b) Manage an entire Data Guard configuration from any system in the configuration.
c) Manage and monitor Data Guard configurations that contain Oracle RAC primary or standby databases.
d) Simplify switchovers and failovers by allowing you to invoke them using either a single key click in Oracle Enterprise Manager or a single command in the DGMGRL command-line interface.
e) Enable fast-start failover to fail over automatically when the primary database becomes unavailable. When fast-start failover is enabled, the Data Guard broker determines if a failover is necessary and initiates the failover to the specified target standby database automatically, with no need for DBA intervention.

Q7. What are the Data Guard protection modes? Summarize each.
Ans:
Maximum availability: This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database.
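A broker configuration is typically built from DGMGRL in a few commands. A sketch under assumed names (dgconf, prim and stby are hypothetical configuration, primary and standby names):

```sql
-- Connect to the primary as a privileged user (credentials are placeholders).
DGMGRL> CONNECT sys@prim

-- Define the configuration and its primary database.
DGMGRL> CREATE CONFIGURATION 'dgconf' AS
          PRIMARY DATABASE IS 'prim' CONNECT IDENTIFIER IS prim;

-- Register the standby and bring the configuration online.
DGMGRL> ADD DATABASE 'stby' AS CONNECT IDENTIFIER IS stby;
DGMGRL> ENABLE CONFIGURATION;

-- Verify health; SUCCESS indicates transport and apply are working.
DGMGRL> SHOW CONFIGURATION;
```

Once enabled, switchover and failover reduce to single commands such as SWITCHOVER TO 'stby'.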
Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one standby database.
Maximum performance: This is the default protection mode. It provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online redo log.
Maximum protection: This protection mode ensures that no data loss will occur if the primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to at least one standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions, if it cannot write its redo to at least one synchronized standby database.

Here are some additional Oracle Data Guard interview questions for newer versions of Oracle:

Q8. If you didn't have access to the standby database and you wanted to find out what error had occurred in a Data Guard configuration, what view would you check in the primary database to see the error message?
Ans: You can check the v$dataguard_status view:
Select message from v$dataguard_status;

Q9. In Oracle 11g, what command in RMAN can you use to create the standby database while the target database is active?
Ans: Oracle 11g has made it extremely simple to set up a standby database environment, because Recovery Manager (RMAN) now supports the ability to clone the existing primary database directly to the intended standby database site over the network via the DUPLICATE DATABASE command while the target database is active. RMAN automatically generates a conversion script in memory on the primary site and uses that script to manage the cloning operation on the standby site with virtually no DBA intervention required.
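The protection mode is switched on the primary with a single ALTER DATABASE statement and verified from v$database. A minimal sketch, assuming a SYNC redo destination already exists for the two stricter modes:

```sql
-- Choose one of the three modes on the primary:
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
-- or: ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
-- or: ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;

-- Confirm the requested mode and the level actually in effect.
SELECT protection_mode, protection_level FROM v$database;
```

protection_level can temporarily differ from protection_mode (e.g. RESYNCHRONIZATION while a gap is being resolved), which makes this query a quick health check.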
You can execute this in a run block in RMAN:
duplicate target database for standby from active database dorecover;

Q10. What additional standby database mode does Oracle 11g offer?
Ans: Oracle 11g introduced the Oracle Snapshot Standby Database. With a snapshot standby, a physical standby database can easily be opened in read-write mode and then converted back to a physical standby database. This is suitable for test and development environments, and it also maintains protection by continuing to receive data from the production database and archiving it for later use.

Q11. In Oracle 11g, how can you speed up backups on the standby database?
Ans: In Oracle 11g, block change tracking is now supported on the standby database.

Q12. With the availability of Active Data Guard, what role does SQL Apply (logical standby) continue to play?
Ans: Use SQL Apply for the following requirements: (a) when you require read-write access to a synchronized standby database but do not modify primary data, (b) when you wish to add local tables to the standby database that can also be updated, or (c) when you wish to create additional indexes to optimize read performance. The ability to handle local writes makes SQL Apply better suited to packaged reporting applications that often require write access to local tables that exist only at the target database. SQL Apply also provides rolling upgrade capability for patchsets and major database releases. Beginning with Oracle 11g, this rolling upgrade functionality can also be used by physical standby databases via Transient Logical Standby.

Q13. Why would I use Active Data Guard and not simply use SQL Apply (logical standby), which is included with Data Guard 11g?
Ans: If read-only access satisfies the requirement, Active Data Guard is a closer fit and is much easier to implement than any other approach. Active Data Guard supports all datatypes and is very simple to implement.
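The snapshot standby round trip described above boils down to a pair of CONVERT commands. A sketch, assuming the standby is mounted with redo apply stopped:

```sql
-- On the physical standby (mounted, managed recovery cancelled):
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;      -- now read-write for destructive testing

-- Redo keeps arriving and is archived, but is not applied while open.

-- When testing is done, discard the test changes and resume the standby role:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```

The conversion back uses an implicit guaranteed restore point, so all changes made while the database was a snapshot standby are flashed away before redo apply resumes.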
An Active Data Guard replica can also easily support additional uses: offloading backups from the primary database, serving as an open read-write test system during off-peak hours (Snapshot Standby), and providing an exact copy of the production database for disaster recovery, fully utilizing standby servers, storage, and software while in the standby role.

Q14. Why do I need the Oracle 11g Active Data Guard option?
Ans: Previous capabilities did not allow Redo Apply to be active while a physical standby database was open read-only, and did not enable RMAN block change tracking on the standby database. This resulted in (a) read-only access to data that was frozen as of the time that the standby database was opened read-only, (b) failover and switchover operations that could take longer to complete due to the backlog of redo data that would need to be applied, and (c) incremental backups that could take up to 20x longer to complete, even on a database with a moderate rate of change. Previous capabilities are still included with Oracle Data Guard 11g; no additional license is required to use them.

Q15. If you wanted to upgrade your current 10g physical standby Data Guard configuration to 11g, can you upgrade the standby to 11g first and then upgrade the primary?
Ans: Yes. In Oracle 11g, you can temporarily convert the physical standby database to a logical standby database to perform a rolling upgrade. When you issue the convert command you need to keep the identity:
alter database recover to logical standby keep identity;

Q16. If you have a low-bandwidth WAN network, what can you do to improve the Oracle 11g Data Guard configuration in a gap-detected situation?
Ans: Oracle 11g introduces the capability to compress redo log data as it is transported over the network to the standby database. It can be enabled using the compression parameter. Compression becomes enabled only when a gap exists and the standby database needs to catch up to the primary database.
alter system set log_archive_dest_1='SERVICE=DBA11GDR COMPRESSION=ENABLE';

Q17. In an Oracle 11g logical standby Data Guard configuration, how can you tell dbms_scheduler to run jobs only in the primary database?
Ans: In Oracle 11g, logical standby provides support for DBMS_SCHEDULER. It is capable of running jobs in both the primary and the logical standby database. You can use the DBMS_SCHEDULER.SET_ATTRIBUTE procedure to set the database_role attribute, specifying that a job should run only when the database operates in that particular role.

Q18. How can you control when an archive log can be deleted in the standby database in Oracle 11g?
Ans: In Oracle 11g, you can control it by using the log_auto_delete initialization parameter. The log_auto_delete parameter must be coupled with the log_auto_del_retention_target parameter to specify the number of minutes an archivelog is maintained until it is purged. The default is 24 hours. For archivelog retention to be effective, the log_auto_delete parameter must be set to true.

Q19. Can Oracle Data Guard be used with Standard Edition of Oracle?
Ans: Yes and no. The automated features of Data Guard are not available in the Standard Edition of Oracle. You can still, however, perform log shipping manually and write scripts to perform the steps by hand. If you are on a Unix platform, you can write shell scripts that identify the logs and then use the scp or sftp command to ship them to the standby server. Then, on the standby server, identify which logs have not been applied, apply/recover them manually, and remove them once applied.

Q20. What is the difference between Active Data Guard and the logical standby implementation of 10g Data Guard?
Ans: Active Data Guard is mostly about the physical standby. Use a physical standby for testing without compromising protection of the production system.
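Restricting a scheduler job to one role comes down to a single SET_ATTRIBUTE call. A minimal sketch, where SYNC_JOB is a hypothetical job name:

```sql
BEGIN
  -- 'SYNC_JOB' is a placeholder for an existing scheduler job.
  -- With database_role = 'PRIMARY', the job runs only while this
  -- database operates in the primary role; after a switchover to
  -- logical standby it stays defined but does not execute.
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYNC_JOB',
    attribute => 'database_role',
    value     => 'PRIMARY');
END;
/
```

A companion job with value => 'LOGICAL STANDBY' is the usual pattern when some work (e.g. reporting refreshes) should run only on the standby side.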
You can open the physical standby read/write, do some destructive things in it (drop tables, change data, whatever), and run a test, perhaps with Real Application Testing. While this is happening, redo is still streaming from production; if production fails, you are covered.
Use the physical standby for reporting while in managed recovery mode. Since a physical standby supports all of the datatypes and a logical standby does not (11g added broader support, but not 100%), there are times when a logical standby isn't sufficient. A physical standby also permits fast incremental backups when offloading backups to it.

Q21. Can Oracle's Data Guard be used on Standard Edition, and if so, how? How can you test that the standby database is in sync?
Ans: Oracle's Data Guard technology is a layer of software and automation built on top of the standby database facility. In Oracle Standard Edition it is possible to run a standby database and update it *manually*. Roughly: put your production database in archivelog mode. Create a hot backup of the database and move it to the standby machine. Then create a standby controlfile on the production machine, and ship that file, along with all the archived redo log files, to the standby server. Once you have all these files assembled, place them in their proper locations, recover the standby database, and you're ready to roll. From this point on, you must manually ship, and manually apply, those archived redo logs to stay in sync with production.
To test your standby database, make a change to a table on the production server and commit the change. Then manually switch a logfile so those changes are archived. Manually ship the newest archived redo log file, and manually apply it on the standby database. Then open your standby database in read-only mode, and select from your changed table to verify those changes are available. Once you're done, shut down your standby and start it up again in standby mode.
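The manual sync test above can be sketched as a command sequence. dg_test_tab is a hypothetical table, and the shipping step (scp/sftp) happens outside SQL:

```sql
-- On the primary: make a committed change and force it into an archived log.
INSERT INTO dg_test_tab VALUES (SYSDATE);   -- dg_test_tab is a placeholder table
COMMIT;
ALTER SYSTEM SWITCH LOGFILE;

-- Ship the newest archived log to the standby host (scp/sftp), then on the standby:
RECOVER STANDBY DATABASE;                   -- apply shipped logs; cancel when caught up
ALTER DATABASE OPEN READ ONLY;
SELECT COUNT(*) FROM dg_test_tab;           -- verify the change arrived

-- Return to standby operation:
SHUTDOWN IMMEDIATE
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
```

This is exactly the loop that Enterprise Edition Data Guard automates with redo transport and managed recovery.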
Contact for more on Dataguard Online Training
Oracle Data Integrator Interview Questions
Q.What Is Oracle Data Integrator (odi)?
Ans: Oracle acquired Sunopsis in 2006 and with it "Sunopsis Data Integrator". Oracle Data Integrator (ODI) is an E-LT (Extract, Load and Transform) tool used for high-speed data movement between disparate systems. The latest version, Oracle Data Integrator Enterprise Edition (ODI-EE), brings together "Oracle Data Integrator" and "Oracle Warehouse Builder" as separate components of a single product with a single licence.

Q.What Is E-lt?
Ans: E-LT is an innovative approach to extracting, loading and transforming data. Typically, ETL application vendors have relied on a costly, heavyweight mid-tier server to perform the transformations required when moving large volumes of data around the enterprise. ODI delivers unique next-generation Extract, Load and Transform (E-LT) technology that improves performance and reduces data integration costs, even across heterogeneous systems, by pushing the required processing down to the typically large and powerful database servers already in place within the enterprise.

Q.What Components Make Up Oracle Data Integrator?
Ans: "Oracle Data Integrator" comprises:
1) Oracle Data Integrator + Topology Manager + Designer + Operator + Agent
2) Oracle Data Quality for Data Integrator
3) Oracle Data Profiling

Q.What Is Oracle Data Integration Suite?
Ans: Oracle Data Integration Suite is a set of data management applications for building, deploying, and managing enterprise data integration solutions:
Oracle Data Integrator Enterprise Edition
Oracle Data Relationship Management
Oracle Service Bus (limited use)
Oracle BPEL (limited use)
Oracle WebLogic Server (limited use)
Additional product options are:
Oracle GoldenGate
Oracle Data Quality for Oracle Data Integrator (Trillium-based DQ)
Oracle Data Profiling (Trillium-based Data Profiling)
ODSI (the former AquaLogic Data Services Platform)

Q.What Systems Can Odi Extract And Load Data Into?
Ans: ODI brings true heterogeneous connectivity out of the box; it can connect natively to Oracle, Sybase, MS SQL Server, MySQL, LDAP, DB2, PostgreSQL and Netezza. It can also connect to any data source supporting JDBC; it's even possible to use the Oracle BI Server as a data source using the JDBC driver that ships with BI Publisher.

Q.What Are Knowledge Modules?
Ans: Knowledge Modules form the basis of 'plug-ins' that allow ODI to generate the relevant execution code, across technologies, to perform tasks in one of six areas. The six types of knowledge module are:
Reverse-engineering knowledge modules, used for reading table and other object metadata from source databases.
Journalizing knowledge modules, which record the new and changed data within either a single table or view or a consistent set of tables or views.
Loading knowledge modules, used for efficient extraction of data from source databases for loading into a staging area (database-specific bulk unload utilities can be used where available).
Check knowledge modules, used for detecting errors in source data.
Integration knowledge modules, used for efficiently transforming data from the staging area to the target tables, generating optimized native SQL for the given database.
Service knowledge modules, which provide the ability to expose data as web services.
ODI ships with many knowledge modules out of the box; they are also extendable and can be modified within the ODI Designer module.

Q.How Do 'contexts' Work In Odi?
Ans: ODI offers a unique design approach through the use of contexts and logical schemas. Imagine a development team: within the ODI Topology Manager, a senior developer can define the system architecture, connections, databases, data servers (tables etc.) and so forth.
These physical objects are linked through contexts to 'logical' architecture objects that are then used by other developers to create interfaces. At run time, on specification of a context within which to execute the interfaces, ODI uses the correct physical connections, databases and tables (source and target) linked to the logical objects used in those interfaces, as defined within the Topology.

Q.Does My Odi Infrastructure Require An Oracle Database?
Ans: No, the ODI modular repositories (a Master repository and one or more Work repositories) can be installed on any database engine that supports ANSI ISO 89 syntax, such as Oracle, Microsoft SQL Server, Sybase AS Enterprise, IBM DB2 UDB or IBM DB2/400.

Q.Does Odi Support Web Services?
Ans: Yes, ODI is 'SOA' enabled and its web services can be used in three ways:
The Oracle Data Integrator Public Web Service, which lets you execute a scenario (a published package) from a web service call.
Data Services, which provide a web service over an ODI data store (i.e. a table, view or other data source registered in ODI).
The ODIInvokeWebService tool, which you can add to a package to request a response from a web service.

Q.What Is The Odi Console?
Ans: The ODI Console is a web-based navigator for accessing the Designer, Operator and Topology components through a browser.

Q.Suppose I Have 6 Interfaces Running And The 3rd One Failed. How To Run The Remaining Interfaces?
Ans: If you are running a sequential load, the failure will stop the other interfaces; go to Operator, right-click on the failed interface and click Restart. If you are running all the interfaces in parallel, only the one interface will fail and the other interfaces will finish.

Q.What Are Load Plans And Types Of Load Plans?
Ans: A load plan is a process to run or execute multiple scenarios as a sequential, parallel or condition-based execution of your scenarios.
Accordingly there are three types of load plans: sequential, parallel and condition-based load plans.

Q.What Is A Profile In Odi?
Ans: A profile is a set of object-wise privileges. We can assign profiles to users, and users get their privileges from the profile.

Q.How To Write Sub-queries In Odi?
Ans: Using a yellow interface and the sub-queries option we can create sub-queries in ODI. Alternatively, we can use a VIEW for sub-queries, or use an ODI procedure to call database queries directly.

Q.How To Remove Duplicates In Odi?
Ans: Use DISTINCT at the IKM level; it will remove duplicate rows while loading into the target.

Q.Suppose We Have Unique And Duplicate Records, But We Want To Load Unique Records Into One Table And Duplicates Into Another Table?
Ans: Create two interfaces (or one procedure) and use two queries: one for unique values and one for duplicate values.

Q.How To Implement Data Validations?
Ans: Use filters and the mapping area; for data quality related to constraints, use CKM flow control.

Q.How To Handle Exceptions?
Ans: Exceptions can be handled in the package's Advanced tab and in the load plan's Exception tab.

Q.In A Package One Interface Failed. How To Know Which Interface Failed If We Have No Access To Operator?
Ans: Set up a mail alert, or check the SNP_SESS_LOG table for session log details.

Q.How To Implement Logic In Procedures So That If Source-side Data Is Deleted, It Is Reflected In The Target Table?
Ans: Use this query in Command on Target:
Delete from Target_table where not exists (Select 'X' From Source_table Where Source_table.ID=Target_table.ID);

Q.If The Source Has 15 Records In Total, With 2 Records Updated And 3 Records Newly Inserted, How Do We Load The Changed And Inserted Records To The Target?
Ans: Use the IKM Incremental Update knowledge module for both insert and update operations.

Q.Can We Implement A Package In A Package?
Ans: Yes, we can call one package from another package.

Q.How To Load Data From One Flat File And One Rdbms Table Using Joins?
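The unique/duplicate split can be expressed as two plain SQL statements of the kind an ODI interface or procedure would generate. A sketch, where src, unique_tgt, dup_tgt and the id key column are all hypothetical names:

```sql
-- Rows whose key appears exactly once go to the 'unique' target.
INSERT INTO unique_tgt
SELECT s.*
FROM   src s
WHERE  1 = (SELECT COUNT(*) FROM src x WHERE x.id = s.id);

-- Rows whose key appears more than once go to the 'duplicates' target.
INSERT INTO dup_tgt
SELECT s.*
FROM   src s
WHERE  1 < (SELECT COUNT(*) FROM src x WHERE x.id = s.id);
```

In ODI these would typically live in two interfaces (or one procedure with two commands), each with the appropriate filter on the source datastore.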
Ans: Drag and drop both the file and the table into the source area and join them in the staging area.

Q.If The Source And Target Are Both Oracle, What Is The Process To Achieve This Requirement (interfaces, Kms, Models)?
Ans: Use LKM SQL to SQL or LKM SQL to Oracle, with IKM Oracle Incremental Update or Control Append.

Q.How To Reverse-engineer Views (how To Load Data From Views)?
Ans: In Models, go to the Reverse Engineering tab and select VIEW as the object type to reverse-engineer.

Q.Is Odi Used By Oracle In Their Products?
Ans: Yes, there are many Oracle products that utilise ODI; here are just a few:
Oracle Application Integration Architecture (AIA)
Oracle Agile products
Oracle Hyperion Financial Management
Oracle Hyperion Planning
Oracle Fusion Governance, Risk & Compliance
Oracle Business Activity Monitoring
Oracle BI Applications also uses ODI as its core ETL tool in place of Informatica, but only for one release of OBIA and when using a certain source system.

Q.Explain What Odi Is. Why Is It Different From Other Etl Tools?
Ans: ODI stands for Oracle Data Integrator. It differs from other ETL tools in that it uses an E-LT approach as opposed to an ETL approach. This approach eliminates the need for an exclusive transformation server between the source and target data servers: the power of the target data server can be used to transform the data, i.e. the target data server acts as the staging area in addition to its role as the target database. The transformation logic is implemented while loading the data from the staging area into the target database. The appropriate CKM (Check Knowledge Module) can also be used at this point to implement data quality requirements.

Q.How Will You Bring The Different Source Data Into Odi?
Ans: You will have to create data servers in the Topology Manager for the different sources that you want.

Q.How Will You Bulk Load Data?
Ans: In ODI there are IKMs that are designed for bulk loading of data.
Q.How Will You Bring In Files From Remote Locations?
Ans: We will invoke the Service knowledge module in ODI; this will help us access data through a web service.

Q.How Will You Handle Data Quality In Odi?
Ans: There are two ways of handling data quality in ODI: the first method deals with handling incorrect data using the CKM; the second method uses the Oracle Data Quality tool (for advanced quality options).

Q.What Is A Procedure And How To Write Procedures In Odi?
Ans: A procedure is a reusable component that allows you to group actions that do not fit in the interface framework (that is, loading a target datastore from one or more sources). A procedure is a sequence of commands launched on logical schemas. It has a group of associated options. These options parameterize whether or not a command should be executed, as well as the code of the commands.
Contact for more on ODI Online Training
Oracle Golden Gate Interview Questions
Q.What are some of the key features of GoldenGate 12c?
Ans: The following are some of the more interesting features of Oracle GoldenGate 12c:
Support for Multitenant Database
Coordinated Replicat
Integrated Replicat mode
Use of a Credential Store
Use of a wallet and master key
Trigger-less DDL replication
Automatic thread adjustment on RAC node failure/start
Support for RAC PDML distributed transactions
RMAN support for mined archive logs

Q.What are the installation options available in OGG 12c?
Ans: You can install Oracle GoldenGate 12c in 2 ways:
Interactive installation with OUI – graphical interface
Silent installation with OUI – command-line interface

Q.What is a Credential Store in OGG 12c?
Ans: The OGG Credential Store manages encrypted passwords and USERIDs that are used to interact with the local database, and associates them with an alias. Instead of specifying the actual USERID and password in a command or a parameter file, you can use the alias. The Credential Store is implemented as an autologin wallet within the Oracle Credential Store Framework (CSF).

Q.How to configure the Credential Store in OGG 12c?
Ans: Steps to configure the Oracle Credential Store are as follows:
By default the Credential Store is located under the "dircrd" directory. If you want to specify a different location, you can use the "CREDENTIALSTORELOCATION" parameter in the GLOBALS file. Example: CREDENTIALSTORELOCATION /u01/app/oracle/OGG_PASSWD
Go to the OGG home and connect to GGSCI:
cd $OGG_HOME
./ggsci
GGSCI>

Q.What command is used to create the credential store?
Ans: ADD CREDENTIALSTORE

Q.How do you add credentials to the credential store?
Ans: ALTER CREDENTIALSTORE ADD USER userid
Example: GGSCI> ALTER CREDENTIALSTORE ADD USER GGS@orcl, PASSWORD oracle ALIAS extorcl DOMAIN OracleGoldenGate

Q.How do you retrieve information from the Oracle Credential Store?
Ans: GGSCI> INFO CREDENTIALSTORE
OR
GGSCI> INFO CREDENTIALSTORE DOMAIN OracleGoldenGate

Q.What are the different data encryption methods available in OGG 12c?
Ans: In OGG 12c you can encrypt data with the following 2 methods:
Encrypt data with a master key and wallet
Encrypt data with ENCKEYS

Q.How do you enable Oracle GoldenGate for Oracle Database 11.2.0.4?
Ans: The database services required to support Oracle GoldenGate capture and apply must be enabled explicitly for an Oracle 11.2.0.4 database. This is required for all modes of Extract and Replicat. To enable Oracle GoldenGate, set the following database initialization parameter. All instances in Oracle RAC must have the same setting.
ENABLE_GOLDENGATE_REPLICATION=true

Q.How does Replicat work in Coordinated Mode?
Ans: In Coordinated Mode, Replicat operates as follows:
Reads the Oracle GoldenGate trail.
Performs data filtering, mapping, and conversion.
Constructs SQL statements that represent source database DML or DDL transactions (in committed order).
Applies the SQL to the target through the SQL interface that is supported for the given target database, such as ODBC or the native database interface.

Q.What is the difference between Classic and Coordinated Replicat?
Ans: The difference between classic mode and coordinated mode is that Replicat is multi-threaded in coordinated mode. Within a single Replicat instance, multiple threads read the trail independently and apply transactions in parallel. Each thread handles all of the filtering, mapping, conversion, SQL construction, and error handling for its assigned workload. A coordinator thread coordinates the transactions across threads to account for dependencies among the threads.

Q.How do you create a coordinated Replicat in OGG 12c?
Ans: You can create a coordinated Replicat with the following OGG command:
ADD REPLICAT rfin, COORDINATED MAXTHREADS 50, EXTTRAIL dirdat/et

Q.If you have created a Replicat process in OGG 12c and forgot to specify the DISCARDFILE parameter, what will happen?
Ans: Starting with OGG 12c, if you don't specify a DISCARDFILE, the OGG process now generates a discard file with default values whenever the process is started with the START command through GGSCI.

Q.Is it possible to start an OGG Extract at a specific CSN?
Ans: Yes, starting with OGG 12c you can start Extract at a specific CSN in the transaction log or trail. Examples:
START EXTRACT fin ATCSN 12345
START EXTRACT finance AFTERCSN 67890
Contact for more on Oracle Golden Gate Online Training
Oracle SCM Interview Questions
Q.What is inventory control?
Ans: Inventory control is the process of reducing inventory costs while remaining responsive to customer demands. By this definition a store would want to lower its acquisition, carrying, ordering and stock-out costs to their lowest possible levels. However, a store would still need to have enough inventory to meet any needs of its customers.

Q.What does inventory affect in a store?
Ans: Inventory levels and their values can affect the income of the store, the amount of taxes paid, and the total stocking cost.

Q.How can the value of inventory be determined?
Ans: The value can be found using four methods in inventory control:
Standard Cost: each item is valued at its standard (predetermined) cost, and these costs are added together for the inventory's value.
Average Cost: the weighted average of the costs for a period is used to determine value.
FIFO Cost (First In, First Out): value is measured using the latest costs of goods, working back towards the beginning of the period until all goods in inventory are valued.
LIFO Cost (Last In, First Out): the costs of goods at the beginning of the period are used to determine the inventory's value, the reverse of FIFO.

Q.What are the important considerations in inventory control?
Ans: For inventory control to work at its best, a store must consider the costs of acquisition, carrying, ordering, and stock-out. The store must also look at its reordering system, its budgeting for inventory, insurance, and forecasted demand.

Q.Will changes made in a Workday calendar come into effect after saving?
Ans: No. The changes made to a Workday calendar come into effect only after building the calendar.

Q.How can different weekly offs be assigned to different shifts without doing it manually?
Ans: Suppose Monday is the calendar start day and we want Thursday as the weekly off for the 1st shift and Friday for the 2nd shift. Enter the Workday Pattern for the 1st shift as 3 On, 1 Off and 3 On, 0 Off.
This means that Monday, Tuesday and Wednesday are working days, Thursday is off, and Friday, Saturday and Sunday are again working days, for any week, for the 1st shift. For the 2nd shift, enter the Workday Pattern as 4 On, 1 Off and 2 On, 0 Off.

Q.What is an Organization?
Ans: An Organization is an inventory location with its own Set of Books, costing method, Workday calendar and list of items.

Q.What is a Subinventory?
Ans: A subinventory is used when two physical inventory locations share the same Set of Books, costing method and Workday calendar, but a different list of items.

Q.How will you ensure that a location is available for transactions in all organizations?
Ans: While defining the location, don't attach an organization to it, so that the location can be used by any organization.

Q.What is the difference between Internal and External Organizations?
Ans: The difference is that we cannot assign people to an External Organization. Examples of External Organizations: Workers' Compensation insurance carriers, and organizations that are recipients of third-party payments from employees' benefits.

Q.What is an Item Master Organization?
Ans: The organization in which the items are defined is called the Item Master Organization. Child organizations (other organizations) refer to the Item Master for the item definition. There is no functional or technical difference between the Item Master Organization and other organizations. However, for simplicity, it is recommended to limit the Item Master to being just an item-defining organization.

Q.Is it possible to have different costing methods for different organizations under the same Item Master Organization?
Ans: Yes. We can even have dummy organizations in order to use different costing methods for different items within an organization.

Q.Can we use Average Costing in an organization where WIP is also installed?
Ans: No. We can't use Average Costing if WIP is installed.
Q.What shall be the Costing Organization of an Org?
Ans: If an individual organization wants to have control over its own costs, we assign the current organization itself as the Costing Organization. Otherwise, we can assign the Item Master Organization or any other organization as the Costing Organization.

Q.What are the Inventory material transactions interface tables?
Ans: The Material Transactions interface tables are: mtl_transactions_interface, mtl_transactions_lots_interface, mtl_serial_numbers_interface and mtl_interface_errors.

Q.In which table is the inventory material transactions history maintained after running the interface program?
Ans: Material transactions data is maintained in the "mtl_material_transactions" table.

Q.In which table do the on-hand quantities of the items exist?
Ans: On-hand quantities of the items are stored in the "mtl_onhand_quantities" table.

Q.In which table are the subinventories stored?
Ans: Subinventories are stored in the "mtl_secondary_inventories" table.

Q.In which table are the locators stored?
Ans: Locator information is stored in the "mtl_item_locations" table.

Q.What is the use of specifying alternate items in Order Management?
Ans: The system lets the order entry user choose between items which are set as alternates, based on attributes such as ATP. Hence an alternate item can be booked if the original item is not available within the customer's timelines.

Q.What are Back-to-Back orders and what are the setups involved?
Ans: Back-to-Back orders are orders for which the item booked on the sales order is not available in Inventory. The system creates a purchase requisition, tracks the item through creation of the Purchase Order from the requisition, and finally, when the PO receipt is made for the item, the receipt quantity is reserved against the sales order. Setups include defining the item with attributes such as 'Build in WIP' and 'Assemble to Order' set to Yes. A sourcing rule needs to be defined for the item, and the sourcing rule should be mapped to the MRP: Assignment Set.

Q.What are ATO and PTO items?
Ans: ATO and PTO are types of items used mainly in OM and Configurator. ATO (Assemble to Order) items are typically items that are built as per the customer's requirement. The ATO model is entered on the sales order and the end items are chosen from the Configurator window. The item's workflow creates a discrete job and the chosen item is built in WIP. Once the discrete job is complete, the item is available in OM for picking and shipping. PTO (Pick to Order) items are items which are picked from inventory based on customer requirements and then shipped.

Q.What are the typical reasons for a line to get backordered during Pick Release?
Ans: The primary reasons for a line to be backordered are:
The item is not available in inventory
The inventory period is closed
Holds are placed against the order or order line

Q.What is the purpose of Trips and Stops?
Ans: A trip is an instance of a specific freight carrier departing from a particular location containing deliveries. A trip is carrier-specific and contains at least two stops, such as a stop to pick up goods and another stop to drop off goods, and may include intermediate stops.

Q.How to set up the Drop Shipment cycle in OM?
Ans: Oracle Order Management and Oracle Purchasing integrate to provide Drop Shipments. Drop Shipments are orders for items that your supplier ships directly to the customer, either because you don't stock or currently don't have the items in inventory, or because it's more cost effective for the supplier to ship the item to the customer directly. In the sales order, specify the Source Type as External. The Purchase Release program should be run, and after it, the Requisition Import program should be run.

Q.What is the purpose of Interface Trip Stop?
Ans: Interface Trip Stop creates the sales order issue transaction and thereby depletes inventory by the sales order shipped quantity. As part of ITS, the COGS account gets generated.

Q.What is RMA and what are the scenarios when the RMA cycle would be used?
Ans: If I have shipped an order via Order Management to the wrong customer, or the wrong item or quantity has been shipped, then I will do an RMA transaction in Inventory to bring the item back. This generates a credit memo in AR. If the customer finds that the item is faulty or defective and returns the shipment to us, in this case too we create an RMA in Inventory and receive the item back. This again creates a credit memo.

Q.How to set up credit hold in Order Management?
Ans: Credit hold setups include:
Customer site level – credit check must be enabled, and the amount and currency must be specified
Payment term – credit check must be enabled
Credit check rule – a credit check rule must be defined
Order type – the credit check rule must be mapped as required

Q.How to set up quantity discounts in price lists?
Ans: Quantity discounts are handled by specifying price breaks, giving the quantity and the corresponding price applicable.

Q.What is the purpose of scheduling a sales order?
Ans: Scheduling a sales order ensures that the line is available for picking and further applicable transactions. Scheduling also looks at sourcing rules to determine the source of the item specified on the order line, and honors the Promise Date and Latest Acceptable Date, whichever is applicable as per the setups.

Q.What are processing constraints in OM?
Ans: Processing constraints ensure that a user doesn't violate any business or system-defined process, by putting checks on various actions performed by the user. Typically, actions such as cancelling orders are governed by processing constraints. Contact us for more on Oracle SCM Online Training.
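The quantity-discount setup described above amounts to a price-break lookup: the unit price is taken from the highest break the ordered quantity reaches. A minimal sketch (illustrative Python with invented break values, not Oracle's pricing engine):

```python
# Price breaks as (minimum quantity, unit price); the numbers are made up.
PRICE_BREAKS = [(1, 10.00), (10, 9.00), (50, 8.00)]

def unit_price(qty, breaks=PRICE_BREAKS):
    """Return the unit price of the highest break the quantity reaches."""
    price = None
    for min_qty, break_price in sorted(breaks):
        if qty >= min_qty:
            price = break_price
    if price is None:
        raise ValueError("quantity below the smallest price break")
    return price

print(unit_price(5))    # 10.0 (no discount)
print(unit_price(25))   # 9.0  (10+ break)
print(unit_price(100))  # 8.0  (50+ break)
```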
Oracle Weblogic Interview Questions
Q.How do I provide user credentials for starting a server?
Ans: When you create a domain, the Configuration Wizard prompts you to provide the username and password for an initial administrative user. If you create the domain in development mode, the wizard saves the username and encrypted password in a boot identity file. A WebLogic Server instance can refer to a boot identity file during its startup process. If a server instance does not find such a file, it prompts you to enter credentials. If you create a domain in production mode, or if you want to change user credentials in an existing boot identity file, you can create a new boot identity file.

Q.Can I start a Managed Server if the Administration Server is unavailable?
Ans: By default, if a Managed Server is unable to connect to the specified Administration Server during startup, it can retrieve its configuration by reading a configuration file and other files directly. You cannot change the server's configuration until the Administration Server is available. A Managed Server that starts in this way is running in Managed Server Independence mode.

Q.What is the function of T3 in WebLogic Server?
Ans: T3 provides a framework for WebLogic Server messages that supports enhancements. These enhancements include abbreviations and features, such as object replacement, that work in the context of WebLogic Server clusters and HTTP and other product tunneling. T3 predates Java Object Serialization and RMI, while closely tracking and leveraging those specifications. T3 is a superset of Java Object Serialization and RMI; anything you can do in Java Object Serialization and RMI can be done over T3. T3 is mandated between WebLogic Servers and between programmatic clients and a WebLogic Server cluster. HTTP and IIOP are optional protocols that can be used to communicate between other processes and WebLogic Server. It depends on what you want to do. For example, to communicate between a browser and WebLogic Server, use HTTP; between an ORB and WebLogic Server, use IIOP.

Q.How do you set the classpath?
Ans: WebLogic Server installs the following scripts that you can use to set the classpath that a server requires:
WL_HOME\server\bin\setWLSEnv.cmd (on Windows)
WL_HOME/server/bin/setWLSEnv.sh (on UNIX)

Q.How do stubs work in a WebLogic Server cluster?
Ans: Clients that connect to a WebLogic Server cluster and look up a clustered object obtain a replica-aware stub for the object. This stub contains the list of available server instances that host implementations of the object. The stub also contains the load balancing logic for distributing the load among its host servers. When a failure occurs and the stub cannot connect to a WebLogic Server instance, the stub removes the failed server instance from its list. If there are no servers left in its list, the stub uses DNS again to find a running server and obtain a current list of running instances. The stub also periodically refreshes its list of available server instances in the cluster; this allows the stub to take advantage of new servers as they are added to the cluster.

Q.How does a server know when another server is unavailable?
Ans: WebLogic Server uses two mechanisms to determine if a given server instance is unavailable. Each WebLogic Server instance in a cluster uses multicast to broadcast regular "heartbeat" messages that advertise its availability. By monitoring heartbeat messages, server instances in a cluster determine when a server instance has failed; the other server instances will drop a server instance from the cluster if they do not receive three consecutive heartbeats from it. WebLogic Server also monitors socket errors to determine the availability of a server instance. For example, if server instance A has an open socket to server instance B, and the socket unexpectedly closes, server A assumes that server B is offline.

Q.How are notifications made when a server is added to a cluster?
Ans: The WebLogic Server cluster broadcasts the availability of a new server instance each time a new instance joins the cluster. Cluster-aware stubs also periodically update their list of available server instances.

Q.How do clients handle DNS requests to failed servers?
Ans: If a server fails and DNS continues to send requests to the unavailable machine, this can waste bandwidth. For a Java client application, this problem occurs only during startup. WebLogic Server caches the DNS entries and removes the unavailable ones, to prevent the client from accessing a failed server twice. Failed servers can be more of a problem for browser-based clients, because they always use DNS. To avoid unnecessary DNS requests with browser-based clients, use a third-party load balancer such as Resonate, BigIP, Alteon, or LocalDirector. These products mask multiple DNS addresses as a single address. They also provide more sophisticated load-balancing options than round-robin, and they keep track of failed servers to avoid routing unnecessary requests.

Q.How many WebLogic Servers can I have on a multi-CPU machine?
Ans: There are many possible configurations and each has its own advantages and disadvantages. BEA WebLogic Server has no built-in limit for the number of server instances that can reside in a cluster. Large multi-processor servers, such as the Sun Microsystems Sun Enterprise 10000, can therefore host very large clusters or multiple clusters. In most cases, WebLogic Server clusters scale best when deployed with one WebLogic Server instance for every two CPUs. However, as with all capacity planning, you should test the actual deployment with your target web applications to determine the optimal number and distribution of server instances.
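The three-missed-heartbeats rule described above can be sketched as a small monitor. This is illustrative Python, not WebLogic internals; the class and method names are invented for the example:

```python
class HeartbeatMonitor:
    """Tracks peer liveness as described above: a peer is dropped
    from the cluster view after three consecutive missed heartbeats."""

    MISSES_ALLOWED = 3

    def __init__(self, peers):
        # consecutive missed-heartbeat count per known peer
        self.missed = {peer: 0 for peer in peers}

    def interval_elapsed(self, heard_from):
        """Call once per heartbeat interval with the set of peers heard from."""
        for peer in list(self.missed):
            if peer in heard_from:
                self.missed[peer] = 0          # any heartbeat resets the count
            else:
                self.missed[peer] += 1
                if self.missed[peer] >= self.MISSES_ALLOWED:
                    del self.missed[peer]      # drop from cluster view

    def live_peers(self):
        return set(self.missed)

monitor = HeartbeatMonitor(["serverA", "serverB"])
monitor.interval_elapsed({"serverA"})          # serverB misses 1
monitor.interval_elapsed({"serverA"})          # serverB misses 2
monitor.interval_elapsed({"serverA"})          # serverB misses 3 -> dropped
print(monitor.live_peers())                    # {'serverA'}
```

Note that a single heartbeat from a struggling server resets its counter, which is why only *consecutive* misses cause removal.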
Q.How can I set the deployment order for applications?
Ans: WebLogic Server allows you to select the load order for applications. WebLogic Server deploys server-level resources (first JDBC, then JMS) before deploying applications. Applications are deployed in this order: connectors, then EJBs, then Web applications. If the application is an EAR, the individual components are loaded in the order in which they are declared in the application.xml deployment descriptor.

Q.How do I increase WebLogic Server memory?
Ans: Increase the allocation of Java heap memory for WebLogic Server (set the minimum and the maximum to the same size). Start WebLogic Server with the -ms32m option to increase the allocation, as in this example:
$ java ... -ms32m -mx32m ...
This allocates 32 megabytes of Java heap memory to WebLogic Server, which improves performance and allows WebLogic Server to handle more simultaneous connections. You can increase this value if necessary.

Q.What is TTL in WebLogic?
Ans: The Multicast TTL (Time To Live) setting specifies the number of routers a multicast message can pass through before the packet is discarded. To configure the multicast TTL for a cluster, change the Multicast TTL value in the WebLogic Server Administration Console. This sets the number of network hops a multicast message makes before the packet can be discarded.

Q.What is the difference between a connection pool and a data source?
Ans: A connection pool physically connects to the database, whereas a data source is a logical resource that can be used by a developer or any other resource for accessing the pool's connections. A data source can be associated with a JNDI name that is used for lookup by any client.

Q.What is HTTP tunneling? How can we configure it on WebLogic?
Ans: HTTP tunneling provides a way to simulate a stateful socket connection between WebLogic Server and a Java client when your only option is to use the HTTP protocol. It is generally used to tunnel through an HTTP port in a security firewall. HTTP is a stateless protocol, but WebLogic Server provides tunneling functionality to make the connection appear to be a regular T3 connection. Steps to configure HTTP tunneling: log in to the Admin Console, click on the server on which you want to enable the HTTP tunneling feature, click the Protocols tab > General, and check the "Enable Tunneling" check box. Now you can communicate with the JVMs (server instances) using protocols other than t3.

Q.Explain the use of HTTP.
Ans: HTTP is the protocol used to enable communication between WebLogic Server and other processes.

Q.Explain the functionality of IIOP.
Ans: IIOP is a protocol that enables communication between WebLogic Server and object request brokers.

Q.Explain the term clustering.
Ans: Clustering is the process of grouping servers together to achieve a high degree of scalability and availability.

Q.What is the purpose of clustering?
Ans: The major goal of clustering is to make high scalability and availability of the servers possible. It also helps balance the load properly and accomplishes failover.

Q.How does cluster communication occur?
Ans: Communication within the cluster is made possible by a multicast IP and port, through the sending of periodic messages which are normally called heartbeat messages.

Q.What is a Node Manager?
Ans: Node Manager is a utility or process running on a physical server that enables starting, stopping, suspending or restarting the Admin and Managed Servers remotely. It is not associated with a domain, and can start any server that resides on the same physical server. It is required if we use the Admin Console to start servers. There are two types of Node Manager: Java-based and script-based.
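The console steps for enabling tunneling can also be scripted with WLST. The following is a sketch to be run under WebLogic's wlst.sh, not plain Python; the admin URL, credentials and server name are placeholders for your own values:

```python
# WLST sketch -- enable HTTP tunneling on one server.
# 't3://localhost:7001', 'weblogic'/'welcome1' and 'myserver' are placeholders.
connect('weblogic', 'welcome1', 't3://localhost:7001')
edit()
startEdit()
cd('/Servers/myserver')
set('TunnelingEnabled', 'true')   # same switch as the console check box
save()
activate()
disconnect()
```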
Q.Can a WebLogic Server (WLS) Admin Server running on a 32-bit JDK be configured with a Managed Server running on a 64-bit JDK?
Ans: This is not a supported Oracle WebLogic Server configuration. It will cause failures of cluster communication between the 32- and 64-bit JVMs. Oracle WebLogic Server supports only homogeneous domain configurations. It is recommended that all WLS instances (Admin and Managed Servers) be on the same JDK level and the same WLS product version.

Q.How can we define in the WebLogic configuration how many concurrent users are allowed at a time for a particular application?
Ans: If you assign a session to each user, then you can control the maximum number of sessions in your web application's WebLogic descriptor, for example by adding a max-in-memory-sessions constraint (e.g. a value of 12) to the session descriptor. If you mean 1 user = 1 session, this is more effective than limiting the number of requests with Work Managers. Another way, when you can't predict the size of sessions and the number of users, is to adjust the memory overloading parameters and set weblogic.management.configuration.WebAppContainerMBean.OverloadProtectionEnabled. Yet another way is to define a Work Manager which sets a limit on the number of threads that can access the application, which in general limits the number of users.

Q.How can we tell how many threads are being used in WebLogic at a time?
Ans: Thread capacity is managed by WebLogic through Work Managers. By default, just one exists: "default", with an unlimited number of threads. If you need to find the exact number of threads being used by an application, check the server's Monitoring > Threads tab.

Q.What is a WebLogic singleton service?
Ans: A singleton service is a service running on a Managed Server that is available on only one member of a cluster at a time.

Q.What is the difference between -Dweblogic options and setting values in the WebLogic console?
Ans: When you use a -Dweblogic.XXX option, you override the console configuration.

Q.How to disable the admin port in WebLogic without the WebLogic console?
Ans: Edit config.xml in the domain's config location, find the administration-port-enabled node, and change its value from 'true' to 'false'.

Q.What is a WebLogic Server cluster?
Ans: A WebLogic Server cluster consists of multiple WebLogic Server instances running simultaneously and working together to provide increased scalability and reliability.

Q.What are dynamic clusters?
Ans: Dynamic clusters consist of server instances that can be dynamically scaled up to meet the resource needs of your application. A dynamic cluster uses a single server template to define the configuration for a specified number of generated (dynamic) server instances.

Q.What are the benefits of clustering?
Ans: Scalability: the capacity of an application deployed on a WebLogic Server cluster can be increased dynamically to meet demand. You can add server instances to a cluster without interruption of service; the application continues to run without impact to clients and end users. High availability: in a WebLogic Server cluster, application processing can continue when a server instance fails. You "cluster" application components by deploying them on multiple server instances in the cluster, so if a server instance on which a component is running fails, another server instance on which that component is deployed can continue application processing.

Q.What is a domain?
Ans: A domain is an interrelated set of WebLogic Server resources that are managed as a unit. A domain includes one or more WebLogic Server instances, which can be clustered, non-clustered, or a combination of clustered and non-clustered instances. A domain can include multiple clusters. A domain also contains the application components deployed in the domain, and the resources and services required by those application components and the server instances in the domain.

Q.What is a multicast storm?
Ans: If server instances in a cluster do not process incoming messages on a timely basis, increased network traffic, including negative acknowledgement (NAK) messages and heartbeat re-transmissions, can result. The repeated transmission of multicast packets on a network is referred to as a multicast storm.

Q.How to find the WebLogic bit version?
Ans: WebLogic ships as a generic distribution. It is a Java program running on top of a VM, so there is no separate WebLogic 64-bit or WebLogic 32-bit version. On the other hand, the JVM running WebLogic must be either 32-bit or 64-bit depending on the architecture.

Q.Why do we need the WebLogic Inactive Connection Timeout?
Ans: A leaked connection is a connection that was not properly returned to the connection pool in the data source. To automatically recover leaked connections, you can specify a value for Inactive Connection Timeout on the JDBC data source (Configuration: Connection Pool page in the Administration Console). When you set a value for Inactive Connection Timeout, WebLogic Server forcibly returns a connection to the data source when there is no activity on a reserved connection for the number of seconds that you specify. When set to 0 (the default value), this feature is turned off.

Q.How do I integrate JNI (native code) into WebLogic?
Ans: The LD_LIBRARY_PATH environment variable should be set in the setWLSEnv.sh or the startWLS.sh scripts.

Q.What is WebLogic thread-local safety?
Ans: WebLogic does not reset user-set ThreadLocal variables when a thread is returned to the pool; the user is responsible for managing them. When such threads are reused, they are likely to interfere, and you may run into memory leaks since the thread-local references aren't cleaned up. You should reset your thread locals before returning the thread to the container. The ThreadLocal.remove() call cleans them up (ensure that it is done in a finally block).

Q.What is the Admin Server's role in WebLogic deployment?
Ans: The Administration Server for the domain manages the deployment process, communicating with the Managed Servers in the cluster throughout the process. Each Managed Server downloads the components to be deployed and initiates local deployment tasks. The deployment state is maintained in the relevant MBeans for the component being deployed.

Q.What is the deployment process in WebLogic?
Ans: In WebLogic Server, applications are deployed in two phases. Before starting, WebLogic Server determines the availability of the Managed Servers in the cluster. First phase of deployment: application components are distributed to the target server instances, and the planned deployment is validated to ensure that the application components can be successfully deployed. During this phase, user requests to the application being deployed are not allowed. Failures encountered during the distribution and validation process result in the deployment being aborted on all server instances, including those on which the validation succeeded. Files that have been staged are not removed; however, container-side changes performed during the preparation are reverted. Second phase of deployment: after the application components have been distributed to targets and validated, they are fully deployed on the target server instances, and the deployed application is made available to clients. If a failure is encountered during the second phase while deploying to the target server instances, the server instance will start in ADMIN state. See "ADMIN State" in Managing Server Startup and Shutdown for Oracle WebLogic Server.

Q.How do you differentiate between a server hang and a server crash?
Ans: When a server crashes, the Java process no longer exists. When the server is hung, the process exists but stops responding. We can use the weblogic.Admin utility to ping the server. In a hang situation we can take multiple thread dumps and analyze the cause of the hang.

Q.What are deployment descriptors?
Ans: A deployment descriptor is a configuration file for a web or EJB application that is to be deployed to a web or EJB container. Deployment descriptors describe the deployment settings of an application, module or component. A descriptor contains metadata describing the contents and structure of the enterprise beans, and runtime transaction and security information for the EJB container. It directs a deployment tool to deploy a module or application with specific container options, and describes specific configuration requirements that a deployer must resolve.

Q.What is a shutdown hook?
Ans: A shutdown hook is simply an initialized but unstarted thread. When the virtual machine begins its shutdown sequence, it starts all registered shutdown hooks in some unspecified order and lets them run concurrently. When all the hooks have finished, it runs all uninvoked finalizers if finalization-on-exit has been enabled. Finally, the virtual machine halts.

Q.What is maxexecutethread?
Ans: The execute thread count, at the heart of WebLogic Server, controls a pool of Java threads (execute threads) which do all the work, allowing for the parallel execution of tasks. By default this pool has 15 threads, but it can be changed for performance tuning by setting weblogic.system.executeThreadCount in weblogic.properties.
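The execute-thread pool just described is a plain fixed-size worker pool. A minimal sketch of the idea (illustrative Python using the standard library's thread pool, not WebLogic's internal implementation):

```python
from concurrent.futures import ThreadPoolExecutor

# A fixed pool of worker threads executing tasks in parallel, sized to
# the 15-thread default described above. The task is a stand-in for
# real request-handling work.
EXECUTE_THREAD_COUNT = 15

def handle_request(n):
    return n * n

with ThreadPoolExecutor(max_workers=EXECUTE_THREAD_COUNT) as pool:
    results = list(pool.map(handle_request, range(10)))

print(results)
```

As in WebLogic, the pool size caps how many tasks run concurrently; queued tasks simply wait for a free execute thread.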
Q.What is the difference between green threads and native threads?
Ans: Green threads are the default threads provided by the JDK. Native threads are the threads provided by the native OS. Native threads can provide several advantages over the default green-threads implementation, depending on your computing situation.

Q.How can I increase the number of Posix reader threads?
Ans: Modifying weblogic.system.percentSocketReaders has no effect on the number of Posix reader threads. Instead, on the command line which starts WebLogic for Unix use -Dweblogic.PosixSocketReaders, and on the command line which starts WebLogic for Windows use -Dweblogic.NTSocketReaders. NOTE: PosixSocketReaders relates to the number of Posix reader threads.

Q.What is a file descriptor?
Ans: A file descriptor is a handle, represented by an unsigned integer, used by a process to identify an open file. It is associated with a file object that includes information such as the mode in which the file was opened, its position type, its initial type, and so on. This information is called the context of the file.

Q.What are the available deployment tools in WebLogic?
Ans: WLS has several ways to deploy an application:
from the Administration Console
WLST
the weblogic.Deployer tool
the wldeploy Ant task
the management deploy API
by copying the module under the applications directory if running in development mode

Q.What is a deployment unit?
Ans: A deployment unit refers to a J2EE application (an Enterprise Application or Web application) or a standalone J2EE module (an EJB or resource adapter) that has been organized according to the J2EE specification and can be deployed to WebLogic Server. WebLogic Server also supports deploying Web Services modules, which are not part of the J2EE specification.

Q.What is virtual hosting?
Ans: Virtual hosting defines a set of host names to which Oracle WebLogic Server instances (servers) or clusters respond. When you use virtual hosting, you use DNS to specify one or more host names that map to the IP address of a server or cluster, and you specify which web applications are served by each virtual host.

Q.What are security providers?
Ans: Security providers are modular components that handle specific aspects of security, such as authentication and authorization.

Q.What are resource adapters?
Ans: Resource adapters are system libraries specific to Enterprise Information Systems (EIS) that provide connectivity to an EIS.

Q.What is a persistent store?
Ans: A persistent store is a physical repository for storing data, such as persistent JMS messages. It can be either a JDBC-accessible database or a disk-based file.

Q.What are startup classes?
Ans: Startup classes are Java programs that you create to provide custom, system-wide services for your applications.

Q.What are Work Managers?
Ans: Work Managers determine how an application prioritizes the execution of its work, based on rules you define and by monitoring actual run-time performance. You can create Work Managers for entire Oracle WebLogic Server domains or for specific application components.

Q.What happens when configuration files are deleted?
Ans: We can configure WebLogic Server to make backup copies of the configuration files. This helps in recovery when the configuration needs to be reversed or when configuration files are corrupted. When the Admin Server starts up, it saves a JAR file named config-booted.jar that contains the configuration files; the old files are saved in the configArchive directory under the domain directory, in a JAR named like config-1.jar.

Q.How are credentials passed to the WebLogic server?
Ans: Credentials can be passed to the WebLogic server in multiple ways:
Pass the credentials on the command line.
Pass them to the WebLogic server when it asks at the command prompt.
Create a boot.properties file and store the user credentials in it; the server stores them in an encrypted format.
For WLST scripts that contain commands requiring a username and password, create a user configuration file via the WLST storeUserConfig command.
For weblogic.Deployer scripts containing commands requiring a username and password, you can specify the user configuration file created via the WLST storeUserConfig command instead of entering your unencrypted credentials.

Q.What is administration mode in production deployment?
Ans: Distributing an application copies the deployment files to the target servers and places the application in a prepared state. You can then start the application in administration mode, which restricts access to the application to a configured administration channel, so you can perform final testing without opening the application to external client connections or disrupting connected clients. You can start an application in administration mode with the -adminmode option.

Q.What are the available roles in WebLogic?
Ans: The built-in security roles for "Admin" and "Deployer" users allow you to perform deployment tasks using the WebLogic Server Administration Console. The "AppTester" security role allows you to test versions of applications that are deployed to administration mode. When deploying across WebLogic domains, the "CrossDomainConnector" role allows you to make inter-domain calls from foreign domains.
Admin: view the server configuration, including the encrypted value of some encrypted attributes; modify the entire server configuration; deploy Enterprise Applications and Web application, EJB, Java EE Connector, and Web Service modules; and start, resume, and stop servers.
Operator: view the server configuration, except for encrypted attributes, and start, resume, and stop servers.
Monitor: view the server configuration, except for encrypted attributes. This security role effectively provides read-only access to the WebLogic Server Administration Console, WLST, and MBean APIs.

Q.What is a Denial-of-Service attack?
Ans: A Denial-of-Service attack is a malicious attempt to overload a server with phony requests. One common type of attack is to send huge amounts of data in an HTTP POST method. You can set three attributes in WebLogic Server that help prevent this type of attack. These attributes are set in the Console, under Servers or Virtual Hosts.

Q.Can an application be deployed without any deployment descriptors?
Ans: Yes, an application can be deployed to a J2EE-compliant server without any deployment descriptors, by making use of annotations or by relying on the container's reasonable defaults. An exploded archive can also be deployed without any deployment descriptors.

Q.What is FastSwap deployment?
Ans: FastSwap deployment allows you to deploy changes to existing applications quickly. Java EE 5 introduced the ability to redefine a class at run time without dropping its classloader or abandoning existing instances. This allows containers to reload altered classes without disturbing running applications, vastly speeding up iterative development cycles and improving the overall development and testing experience.

Q.What is OPatch?
Ans: OPatch is a Java-based utility that runs on all supported operating systems and requires installation of the Oracle Universal Installer. It is used to apply patches to Oracle software; it patches not just WebLogic but the whole Oracle Fusion stack. OPatch offers many of the same features as Smart Update, but it has a different set of commands and command options.

Q.What is WebLogic version compatibility?
Ans: Version compatibility means that all WebLogic Server instances in a domain must run the same WebLogic version. For example, in WebLogic Server 12.1.3, the Administration Server, Managed Servers, and the WebLogic domain must all be at version 12.1.3.
Q.Can a WebLogic cluster be configured on mixed platforms?
Ans: Yes, a WebLogic cluster can be configured across mixed platforms, but this can have a negative impact on load balancing and performance. If you must operate a cluster on a mixed platform, Oracle strongly recommends that you understand the load balancing and performance implications.
Q.Can a Node Manager run with a different WebLogic version?
Ans: Oracle recommends that the version of Node Manager used in a WebLogic domain match the version of the Administration Server.
Q.What is the WebLogic Diagnostics Framework?
Ans: The WebLogic Diagnostics Framework (WLDF) is a monitoring and diagnostic framework that defines and implements a set of services that run within WebLogic Server processes and participate in the standard server life cycle. Using WLDF, you can create, collect, analyze, archive, and access diagnostic data generated by a running server and the applications deployed within its containers. This data provides insight into the run-time performance of servers and applications and enables you to isolate and diagnose faults when they occur.
Q.What is the Harvester in WLDF?
Ans: The Harvester captures metrics from run-time MBeans, including WebLogic Server MBeans and custom MBeans, which can be archived and later accessed for viewing historical data.
Q.What are Watches and Notifications in WLDF?
Ans: These provide the means for monitoring server and application states and sending notifications based on criteria set in the watches.
Q.What are the types of data sources provided in WebLogic?
Ans: There are three types of data sources provided in WebLogic.
They are:
Generic data sources: generic data sources and their connection pools provide connection management processes that help keep your system running efficiently. You can set options in the data source to suit your applications and your environment.
GridLink data sources: an event-based data source that adaptively responds to state changes in an Oracle RAC instance.
Multi data sources: an abstraction around a group of generic data sources that provides load balancing or failover processing.
Q.What are the types of session replication in WebLogic?
Ans: WebLogic uses two types of session replication:
In-memory replication: using in-memory replication, WebLogic Server copies a session state from one server instance to another. The primary server creates a primary session state on the server to which the client first connects, and a secondary replica on another WebLogic Server instance in the cluster. The replica is kept up to date so that it may be used if the server that hosts the servlet fails.
JDBC-based persistence: in JDBC-based persistence, WebLogic Server maintains the HTTP session state of a servlet or JSP using file-based or JDBC-based persistence.
Q.Can we create a replica-aware stub?
Ans: If you are using EJBs, set "home-is-clusterable" or "stateless-bean-is-clusterable" to true in weblogic-ejb-jar.xml; by default those values are true.
Q.Is it possible in WebLogic to create a read-only JDBC data source?
Ans: The data source allows you to obtain pooled connection instances, each pooled connection instance representing a physical connection to a database that remains open during use by a series of logical connection instances. So, what you are allowed to do with a pooled connection instance strictly depends on the database permissions granted to the user used to create the physical connection.
In other words, if you want a read-only pool, use a user with restricted rights at the database level when creating your pool.
Q.How does WebLogic Server (10.3.2) initialize its security provider database at startup?
Ans: The security provider database should be initialized the first time security providers are used (that is, before the security realm containing the security providers is set as the default, or active, security realm). This initialization can be done when a WebLogic Server instance boots, or when a call is made to one of the security provider's MBeans.
Q.What is the WebLogic Server life cycle?
Ans: There are nine states of a server: Shutdown, Starting, Standby, Resuming, Running, Suspending, Shutting down, Failed, and Unknown. The series of states through which a WebLogic Server instance can transition is called the server life cycle.
Q.What are locks in WebLogic?
Ans: There are four types of lock available in WebLogic:
config.lok: used for getting the file lock on the config.xml file. This lock ensures that the config.xml file is owned by only one process at a time, and that updates to config.xml are done in sequential order. Location: cfgdir/config/config.xml
edit.lok: the most important lock that we see. This lock ensures that only one user is editing the configuration at any point in time; no two operations are performed at the same time. Location: cfgdir/
embeddedLDAP.lok: this file locks access to the embedded LDAP server to ensure that only one person has access to the directory server at any time. Location: /cfgdir/servers//data/ldap/ldapfiles/
XXXServer.lok: this lock indicates that a given server is running, ensuring that the server is not started or running multiple times. Location: /cfgdir/servers//servername.lok
When a WebLogic server is stopped, the embeddedLDAP.lok and XXXservername.lok files are deleted automatically.
Q.How can I set deployment order for applications?
Ans: WebLogic Server allows you to select the load order for applications. WebLogic Server deploys server-level resources (first JDBC and then JMS) before deploying applications. Applications are deployed in this order: connectors, then EJBs, then Web applications. If the application is an EAR, the individual components are loaded in the order in which they are declared in the application.xml deployment descriptor.
Q.Can I refresh static components of a deployed application without having to redeploy the entire application?
Ans: Yes. You can use weblogic.Deployer to specify a component and target a server, using the following syntax:
java weblogic.Deployer -adminurl http://admin:7001 -name appname -targets server1,server2 -deploy jsps/*.jsp
Q.How do I turn the auto-deployment feature off?
Ans: The auto-deployment feature checks the applications folder every three seconds to determine whether there are any new applications or any changes to existing applications, and then dynamically deploys these changes. The auto-deployment feature is enabled for servers that run in development mode. To disable the auto-deployment feature, use one of the following methods to place servers in production mode:
In the Administration Console, click the name of the domain in the left pane, then select the Production Mode checkbox in the right pane.
At the command line, include the following argument when starting the domain's Administration Server: -Dweblogic.ProductionModeEnabled=true
Production mode is set for all WebLogic Server instances in a given domain.
Q.Can I enable requests to a JDBC connection pool for a database connection to wait until a connection is available?
Ans: No, there's no way to allow a request to wait for a pool connection, and from the system's point of view there should not be. Each request that waits for a connection ties up one of the fixed number of execute threads in the server, which could otherwise be running another server task.
Too many waiting requests could tie up all of the execute threads and freeze the server.
Q.How many admin consoles are possible in a single domain?
Ans: Only one.
Q.What is the boot.properties file?
Ans: boot.properties is the file used by an Administration or Managed Server during startup for the username and password. It exists under your domain/servers/server_name/security folder.
Q.What are muxer threads?
Ans: These are special threads in WebLogic Server that read incoming requests from external entities. Their main usage is to read the incoming request and then pass it to either an execute thread or a Work Manager. WebLogic allocates a percentage of the thread pool for these threads: the default value is 33%, and not more than 50%.
Q.What is JRCMD?
Ans: jrcmd is a tool provided with the JRockit JDK that sends commands to the JRockit JVM. It is a command-line tool available in JRockit/bin/.
Q.What are connection filters?
Ans: Connection filters are another feature provided by WebLogic for network-layer security. These connection filters block unwanted access to resources; for example, they can be used to block an IP address from accessing the admin console of a WebLogic server.
Q.What is SNMP?
Ans: The Simple Network Management Protocol (SNMP) is an application-layer protocol that facilitates the exchange of management information between network devices; it is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP enables network administrators to manage network performance, find and solve network problems, and plan for network growth.
Q.What is a lightweight WebLogic container?
Ans: Normally when a WebLogic server is started, the server type is wls. This server type starts all services, but when we start the WebLogic server with the wlx option, the server starts in a lightweight mode.
The "wlx" option starts a server instance that excludes the following services, making for a lighter-weight runtime footprint: Enterprise JavaBeans (EJB), Java EE Connector Architecture (JCA), and Java Message Service (JMS). Just pass "-DserverType=wlx" to the startWebLogic.sh script.
Q.How do you bind an IP address to a WebLogic server?
Ans: There are cases where we need to run a WebLogic server bound to a specific IP address. This helps when we have multiple WebLogic server instances running on the same machine, which has multiple network interfaces. You bind the address as follows:
java -msXXm -mxXXm ... -Dweblogic.system.bindAddr=xxx.xxx.xxx.xxx weblogic.Server
Q.How do you clean the application cache in WebLogic?
Ans: The application cache lives in the servers directory, created whenever we start the Managed Servers. This directory contains one subdirectory for each Oracle WebLogic Server instance in the domain; the subdirectories contain data that is specific to each server instance. To clean the application cache, go to /servers//tmp/_WL_user/ where you will see all applications that were deployed to this server, and delete the application whose cache you want to clean. Before doing that, stop the server, delete the application cache, and then restart the server.
Q.How do we clean the EJB cache?
Ans: To clean the EJB cache, go to /domains/servers//cache/EJBCompilerCache, remove the EJBCompilerCache, and restart the servers. The EJB files are then recompiled. This is helpful when dealing with many EJB-based applications.
Q.What is the advantage of silent mode installation?
Ans: For silent mode installation you need to specify the log file and the XML file. The difference between console and silent mode is that in console mode each step is visible, while in silent mode everything is configured up front.
The biggest advantage of silent mode installation is that it is non-interactive, so your intervention is not required during installation. All the parameters to be used during installation are defined in an XML file (usually silent.xml), e.g.:
./filename.bin -mode=silent -silent_xml=silent.xml
Q.What does 'stub' mean in WebLogic Server?
Ans: Clients that connect to a WebLogic Server cluster and look up a clustered object obtain a replica-aware stub of the object. The stub contains the list of all the available server instances hosting the object. It also has load-balancing logic to distribute the load across the multiple hosts.
Q.What is a multicast address?
Ans: A multicast address is an address that can be used to send messages to a group of hosts across different networks. Multicast addresses are in the range 224.0.0.0 to 239.255.255.255.
Q.What is garbage collection?
Ans: Garbage collection (GC) is a form of automatic memory management. The garbage collector attempts to reclaim garbage, i.e., memory occupied by objects that are no longer in use by the program.
Q.What are the different types of WLST modes?
Ans: There are two connection modes:
1) Offline mode: WLST helps you create and extend a domain, and create domain templates. In offline mode, WLST acts as an interface to the Node Manager, and you can issue WLST commands to start and stop Managed Server instances without connecting to the Administration Server.
2) Online mode: WLST in online mode acts as a Java Management Extensions (JMX) client that manages the domain's resources by modifying the server's Configuration MBeans. Thus, WLST offers you the same domain management configuration capabilities as the Administration Console.
Q.What are heap, core, and thread dumps?
Ans: A heap dump is a snapshot of memory at a given point in time.
It contains information on the Java objects and classes in memory at the time the snapshot was taken.
A core dump is the printing, or copying to a more permanent medium (such as a hard disk), of the contents of random access memory (RAM) at one moment in time. One can think of it as a full-length "snapshot" of RAM. A core dump is taken mainly for the purpose of debugging a program.
A thread dump is a list of all the Java threads that are currently active in a Java Virtual Machine (JVM).
Q.How do you check the WebLogic version?
Ans: We can use the registry.xml file.
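As a sketch of the registry.xml approach: the path and XML shape below are illustrative, created locally for demonstration; on a real install you would inspect registry.xml under your own Middleware home instead.

```shell
# Sketch: registry.xml under the Middleware home records the installed
# WebLogic version. Here we write a minimal fake registry.xml to show
# the idea, then grep the version attribute out of it.
cat > /tmp/registry.xml <<'EOF'
<bea-product-information>
  <host>
    <product name="WebLogic Platform" version="10.3.6.0"/>
  </host>
</bea-product-information>
EOF
grep -o 'version="[0-9.]*"' /tmp/registry.xml
# prints: version="10.3.6.0"
```

On a real system, point the grep at the registry.xml in your Middleware home rather than the /tmp sample above.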
React JS Interview Questions
What Is ReactJS?
Ans: React is an open-source JavaScript front-end UI library developed by Facebook for creating interactive, stateful, and reusable UI components for web and mobile apps. It is used by Facebook, Instagram, and many more web apps. ReactJS is used for handling the view layer for web and mobile applications. One of React's unique major points is that it can render not only on the client side but also on the server side, and the two can work together interoperably.
Why Is ReactJS Used?
Ans: React is used to handle the view part of mobile applications and web applications.
Does ReactJS Use HTML?
Ans: No, it uses JSX, which is similar to HTML.
When Was ReactJS Released?
Ans: March 2013.
What Is the Current Stable Version Of ReactJS?
Ans: Version 15.5, released on April 7, 2017.
What Are the Life Cycle Phases Of ReactJS?
Ans: Initialization, State/Property Updates, Destruction.
What Are the Features Of ReactJS?
Ans: JSX: JSX is a JavaScript syntax extension. Components: React is all about components. One-direction flow: React implements one-way data flow, which makes it easy to reason about your app.
What Are the Advantages Of ReactJS?
Ans: React uses a virtual DOM, which is a JavaScript object; this improves app performance. It can be used on the client and server side. Component and data patterns improve readability. It can be used with other frameworks as well.
How Do You Embed Two Components In One Component?
Ans:
import React from 'react';

class Header extends React.Component {
  render() {
    return <h1>Header</h1>;
  }
}

class App extends React.Component {
  render() {
    return (
      <div>
        <Header />
      </div>
    );
  }
}
What Are the Advantages Of Using ReactJS?
Ans: Advantages of ReactJS: React uses a virtual DOM, which is a JavaScript object. This improves application performance, as the JavaScript virtual DOM is faster than the regular DOM. React can be used on the client side as well as the server side. Using React increases readability and makes maintainability easier.
Component and data patterns improve readability and thus make it easier to maintain larger apps. React can be used with any other framework (Backbone.js, Angular.js) as it is only a view layer. React's JSX makes it easier to read the code of our components; it's really easy to see the layout and how components interact, plug in, and combine with each other in the app.
What Are the Limitations Of ReactJS?
Ans: Limitations of ReactJS: React covers only the view layer of the app, so we still need the help of other technologies to get a complete tooling set for development. React uses inline templating and JSX, which can seem awkward to some developers. The library of React is large. The learning curve for ReactJS may be steep.
How Do You Use Forms In ReactJS?
Ans: In React's virtual DOM, HTML input elements present an interesting problem. In other DOM environments, we can render the input or textarea and let the browser maintain its state, that is, its value; we can then get and set the value implicitly with the DOM API. In HTML, form elements such as input, textarea, and select maintain their own state and update it based on the input provided by the user. In React, a component's mutable state is handled by the state property and is only updated by setState(). Input and textarea components use the value attribute; checkbox and radio components use the checked attribute; option components (within select) use the selected attribute for the select box.
How Do You Use Events In ReactJS?
Ans: React normalizes every event so that it has common and consistent behavior across all browsers. Normally, in plain JavaScript or other frameworks, the onchange event is triggered after we have typed something into a text field and then "exited out of it". In ReactJS we cannot do it this way. The explanation is typical and non-trivial: an input textbox is rendered initialized with the value "dataValue".
When the user changes the input in the text field, the node's value property will update and change. However, node.getAttribute('value') will still return the value used at initialization time, that is, dataValue.
Form events:
onChange: the onChange event watches input changes and updates state accordingly.
onInput: triggered on input data.
onSubmit: triggered on the submit button.
Mouse events:
onClick: triggered on click of any component.
onDoubleClick: triggered on double-click of any component.
onMouseMove: triggered on mouse move over any component or panel.
onMouseOver: triggered on mouse over any component, panel, or div.
Touch events:
onTouchCancel: this event is for canceling a touch event.
onTouchEnd: triggered when a touch of the screen ends.
onTouchMove: triggered on movement during a touch.
onTouchStart: triggered on touching a device.
Give An Example Of Using Events?
Ans:
import React from 'react';
import ReactDOM from 'react-dom';

var StepCounter = React.createClass({
  getInitialState: function() {
    return { counter: this.props.initialCounter };
  },
  handleClick: function() {
    this.setState({ counter: this.state.counter + 1 });
  },
  render: function() {
    return <div onClick={this.handleClick}>OnClick Event, Click Here: {this.state.counter}</div>;
  }
});
ReactDOM.render(<StepCounter initialCounter={7} />, document.getElementById('content'));
Explain Various Flux Elements Including Action, Dispatcher, Store And View?
Ans: Flux can be better explained by defining its individual components:
Actions: helper methods that facilitate passing data to the dispatcher.
Dispatcher: the central hub of the app; it receives actions and broadcasts payloads to registered callbacks.
Stores: containers for application state and logic that have callbacks registered with the dispatcher. Every store maintains a particular state and updates it when needed; it wakes up on a relevant dispatch to retrieve the requested data.
This is accomplished by registering with the dispatcher when constructed. Stores are similar to models in a traditional MVC (Model View Controller), but they manage the state of many objects; they do not represent a single record of data like ORM models do.
Controller Views: React components that grab the state from stores and pass it down through props to child components to render the application.
What Is the Flux Concept In ReactJS?
Ans: Flux is the application architecture that Facebook uses for developing client-side web applications and uses internally when working with React. It is not a framework or a library; it is simply a new technique that complements React and the concept of unidirectional data flow. Facebook's dispatcher library is a sort of global pub/sub handler that broadcasts payloads to registered callbacks.
Give An Example Of Both Stateless And Stateful Components With Source Code?
Ans: Stateless: when a component is "stateless", its state is calculated internally but it never directly mutates it. With the same inputs, it will always produce the same output. This means it has no knowledge of past, current, or future state changes.
var React = require('react');
var ReactDOM = require('react-dom');

var Header = React.createClass({
  render: function() {
    return <img src="header.png" />;
  }
});
ReactDOM.render(<Header />, document.body);
Stateful: when a component is "stateful", it is a central point that stores information in memory about the app/component's state and has the ability to change it. It has knowledge of past, current, and potential future state changes. A stateful component changes the state using the this.setState method.
var React = require('react');

var Header = React.createClass({
  getInitialState: function() {
    return { imageSource: "header.png" };
  },
  changeImage: function() {
    this.setState({ imageSource: "changeheader.png" });
  },
  render: function() {
    return <img src={this.state.imageSource} onClick={this.changeImage} />;
  }
});
module.exports = Header;
Explain A Basic Code Snippet Of JSX With the Help Of A Practical Example?
Ans: Your browser does not understand JSX code natively; we need to convert it to JavaScript first so the browser can understand it. We have a plugin that handles this, including Babel 5's in-browser ES6 and JSX transformer called browser.js. Babel will recognize JSX code in script tags and transform/convert it to normal JavaScript code. In production we will need to pre-compile our JSX code into JS before deploying so that our app renders faster.
My First React JSX Example:
var HelloWorld = React.createClass({
  render: function() {
    return <div>Hello, World</div>;
  }
});
ReactDOM.render(<HelloWorld />, document.getElementById('hello-world'));
What Are the Advantages Of Using JSX?
Ans: JSX is completely optional and not mandatory; we don't need to use it in order to use React, but it has several advantages and a lot of nice features. JSX is faster, as it performs optimization while compiling code to vanilla JavaScript. JSX is also type-safe, meaning it is strictly typed and most errors can be caught during compilation of the JSX code to JavaScript. JSX makes it easier and faster to write templates if we are familiar with HTML syntax.
What Is ReactJS-JSX?
Ans: JSX (JavaScript XML) lets us build DOM nodes with HTML-like syntax. JSX is a preprocessor step that adds XML syntax to JavaScript. Like XML, JSX tags have a tag name, attributes, and children. If an attribute/property value is enclosed in quotes (""), the value is a string; otherwise, wrap the value in braces and the value is the enclosed JavaScript expression.
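The JSX-to-JavaScript conversion described above can be illustrated with a simplified stand-in for what the transform emits. This is a conceptual sketch only, not Babel's or React's actual output; the createElement function here is our own miniature, not React.createElement.

```javascript
// Sketch: conceptually, a JSX tag like <div className="box">Hello</div>
// compiles to a function call that builds a plain object tree.
// This createElement is a simplified stand-in for the real thing.
function createElement(type, props, ...children) {
  return { type: type, props: props || {}, children: children };
}

// What the transform would produce for <div className="box">Hello</div>:
const vnode = createElement("div", { className: "box" }, "Hello");
console.log(vnode.type); // prints: div
console.log(vnode.props.className); // prints: box
```

The point is that JSX is purely syntactic: after compilation, only ordinary JavaScript function calls and objects remain.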
What Are Components In ReactJS?
Ans: React encourages the idea of reusable components. They are widgets or other parts of a layout (a form, a button, or anything that can be marked up using HTML) that you can reuse multiple times in your web application. ReactJS enables us to create components by invoking the React.createClass() method, which features a render() method responsible for displaying the HTML code. When designing interfaces, we have to break down the individual design elements (buttons, form fields, layout components, etc.) into reusable components with well-defined interfaces. That way, the next time we need to build some UI, we can write much less code. This means faster development time, fewer bugs, and fewer bytes down the wire.
How Do You Apply Validation On Props In ReactJS?
Ans: When the application is running in development mode, React will automatically check all props that we set on components to make sure they have the right data type. For instance, if we say a component has a message prop which is a string and is required, React will automatically warn if it gets a number or boolean object instead. For performance reasons this check is only done in development environments; in production it is disabled so that rendering is done quickly. Warning messages are generated easily using a set of predefined options such as: PropTypes.string, PropTypes.number, PropTypes.func, PropTypes.node, PropTypes.bool.
What Are State And Props In ReactJS?
Ans: State is the place where the data comes from. We should make our state as simple as possible and minimize the number of stateful components. For example, if ten components need data from the state, we should create one container component that will keep the state for all of them. The state starts with a default value when a component mounts and then suffers mutations over time (basically generated from user events).
A component manages its own state internally, but, besides setting an initial state, has no business fiddling with the state of its children. You could say the state is private.
import React from 'react';
import ReactDOM from 'react-dom';

var StepCounter = React.createClass({
  getInitialState: function() {
    return { counter: this.props.initialCount };
  },
  handleClick: function() {
    this.setState({ counter: this.state.counter + 1 });
  },
  render: function() {
    return <div onClick={this.handleClick}>{this.state.counter}</div>;
  }
});
ReactDOM.render(<StepCounter initialCount={7} />, document.getElementById('content'));
Props: they are immutable; this is why the container component should define the state that can be updated and changed. Props are used to pass data down from our view-controller (our top-level component). When we need immutable data in our component, we can just add props to the ReactDOM.render() function.
import React from 'react';
import ReactDOM from 'react-dom';

class PropsApp extends React.Component {
  render() {
    return (
      <div>
        <h1>{this.props.headerProperty}</h1>
        <p>{this.props.contentProperty}</p>
      </div>
    );
  }
}
ReactDOM.render(
  <PropsApp headerProperty="Header" contentProperty="Content" />,
  document.getElementById('app')
);
What Is the Difference Between State And Props In ReactJS?
Ans: Props: passed in from the parent component. These properties are read by the PropsApp component and sent to the ReactDOM view.
State: created inside the component by getInitialState. this.state reads the property of the component; we update its value with the this.setState() method, which then re-renders the ReactDOM view. State is private within the component.
What Are The Benefits Of Redux?
Ans:
Maintainability: maintenance of Redux becomes easier due to strict code structure and organisation.
Organization: code organisation is very strict, hence the stability of the code is high, which in turn makes the work much easier.
Server rendering: this is useful, particularly for the preliminary render, which keeps up a better user experience or search engine optimization. The server-side created stores are forwarded to the client side.
Developer tools: it is highly traceable, so changes in position and changes in the application all give the developers a real-time experience.
Ease of testing: the first rule of writing testable code is to write small functions that do only one thing and are independent. Redux's code is made of functions that are small, pure, and isolated.
How Is Redux Distinct From MVC And Flux?
Ans: In an MVC structure, the data, presentation, and logical layers are well separated and handled, but a change to the application even at a small point may involve a lot of changes through the application. This happens because data flow is bidirectional in MVC. Maintenance of MVC structures is complex, and debugging expects a lot of experience as well. Flux stands closely related to Redux: a store-based strategy allows capturing the changes applied to the application state; the event subscription and the current state are connected by means of components, and callback payloads are broadcast by means of Redux.
What Are Functional Programming Concepts?
Ans: The various functional programming concepts used to structure Redux are listed below:
Functions are treated as first-class objects.
Functions can be passed in the form of arguments.
Flow can be controlled using recursion, functions, and arrays.
Helper functions such as reduce, map, and filter are used.
Functions can be linked together.
The state doesn't change.
Prioritizing the order of executing the code is not really necessary.
What Is a Redux Change Of State?
Ans: On the release of an action, a change in state is applied to the application; this ensures that the intent to change the state is achieved.
Example: The user clicks a button in the application. A function is called in the form of a component, so an action gets dispatched by the relative container.
This happens because the prop (which was just called in the container) is tied to an action dispatcher using mapDispatchToProps (in the container). The reducer, on capturing the action, executes a function, and this function returns a new state with specific changes. The state change is known by the container, which modifies a specific prop in the component as a result of the mapStateToProps function.
Where Can Redux Be Used?
Ans: Redux is mostly used in combination with React, but it also has the ability to be used with other view libraries. Some famous entities like AngularJS, Vue.js, and Meteor can be combined with Redux easily. This is a key reason for the popularity of Redux in its ecosystem: so many articles, tutorials, middleware, tools, and boilerplates are available.
What Is The Typical Flow Of Data In A React + Redux App?
Ans: A callback from a UI component dispatches an action with a payload; these dispatched actions are intercepted and received by the reducers. This interception generates a new application state. From here the new state is propagated down through the hierarchy of components from the Redux store. The below diagram depicts the entity structure of a Redux + React setup.
What Is the Store In Redux?
Ans: The store holds the application state and supplies the helper methods for accessing the state, registering listeners, and dispatching actions. There is only one store while using Redux. The store is configured via the createStore function. The single store represents the entire state; reducers return a state via an action.
export function configureStore(initialState) {
  return createStore(rootReducer, initialState);
}
The root reducer is a collection of all reducers in the application:
const rootReducer = combineReducers({
  donors: donorReducer,
});
Explain Reducers In Redux?
Ans: The state of a store is updated by means of reducer functions.
A stable collection of reducers forms a store, and each store maintains a separate state associated with it. To update the array of donors, we define the donor application reducer as follows:
export default function donorReducer(state = [], action) {
  switch (action.type) {
    case actionTypes.addDonor:
      return [...state, action.donor];
    default:
      return state;
  }
}
The initial state and the action are received by the reducer. Based on the action type, it returns a new state for the store. The state maintained by reducers is immutable. The reducer below takes the current state and action as arguments and then returns the next state:
function handleAuthentication(st, actn) {
  return _.assign({}, st, { auth: actn.payload });
}
What Are Redux Workflow Features?
Ans:
Reset: allows you to reset the state of the store.
Revert: rolls back to the last committed state.
Sweep: all disabled actions that you might have fired by mistake will be removed.
Commit: makes the current state the initial state.
Explain Actions In Redux?
Ans: Actions in Redux are functions which return an action object. The action type and the action data are packed in the action object, which also allows a donor to be added to the system. Actions send data between the store and the application. All information retrieved by the store is produced by the actions.
export function addDonorAction(donor) {
  return {
    type: actionTypes.addDonor,
    donor,
  };
}
Actions are built on top of JavaScript objects and have a type property associated with them.
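The store, reducer, and action ideas above can be sketched together in plain JavaScript. This is a hypothetical miniature, not Redux's actual implementation; the "ADD_DONOR" type string and the donor values are illustrative.

```javascript
// Minimal sketch of what createStore provides: getState, dispatch,
// and subscribe, with state changes produced only by the reducer.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: function () { return state; },
    dispatch: function (action) {
      state = reducer(state, action);       // reducer computes the next state
      listeners.forEach(function (l) { l(); }); // notify subscribers
      return action;
    },
    subscribe: function (listener) {
      listeners.push(listener);
    },
  };
}

// A donor reducer like the one above, with an inlined action type.
function donorReducer(state, action) {
  switch (action.type) {
    case "ADD_DONOR":
      return [...state, action.donor];      // new array, old state untouched
    default:
      return state;
  }
}

const store = createStore(donorReducer, []);
store.dispatch({ type: "ADD_DONOR", donor: "Alice" });
console.log(store.getState()); // [ 'Alice' ]
```

Note how dispatch never mutates state directly: it always replaces it with whatever the reducer returns, which is what makes the state changes traceable.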
Ruby On Rails Interview Questions
Q.What Is Rails?
Ans: Rails is an extremely productive web-application framework written in the Ruby language by David Heinemeier Hansson. Rails is an open-source Ruby framework for developing database-backed web applications. Rails includes everything needed to create a database-driven web application using the Model-View-Controller (MVC) pattern.

Q.What Are The Various Components Of Rails?
Ans:
Action Pack: Action Pack is a single gem that contains Action Controller, Action View and Action Dispatch. The "VC" part of "MVC".
Action Controller: Action Controller is the component that manages the controllers in a Rails application. The Action Controller framework processes incoming requests to a Rails application, extracts parameters, and dispatches them to the intended action. Services provided by Action Controller include session management, template rendering, and redirect management.
Action View: Action View manages the views of your Rails application. It can create both HTML and XML output by default. Action View manages rendering templates, including nested and partial templates, and includes built-in AJAX support.
Action Dispatch: Action Dispatch handles routing of web requests and dispatches them as you want, either to your application or to any other Rack application. Rack applications are a more advanced topic and are covered in a separate guide called Rails on Rack.
Action Mailer: Action Mailer is a framework for building e-mail services. You can use Action Mailer to receive and process incoming email and to send simple plain-text or complex multipart emails based on flexible templates.
Active Model: Active Model provides a defined interface between the Action Pack gem services and Object-Relational Mapping gems such as Active Record. Active Model allows Rails to utilize other ORM frameworks in place of Active Record if your application needs this.
Active Record: Active Record is Rails' Object-Relational Mapping (ORM) layer, where classes are mapped to tables, objects are mapped to rows, and object attributes are mapped to columns in the table.
Active Resource: Active Resource provides a framework for managing the connection between business objects and RESTful web services. It implements a way to map web-based resources to local objects with CRUD semantics.
Active Support: Active Support is an extensive collection of utility classes and standard Ruby library extensions that are used in Rails, both by the core code and by your applications.

Q.Explain About RESTful Architecture?
Ans: REST stands for Representational State Transfer. REST is an architecture for designing both web applications and application programming interfaces (APIs) that uses HTTP. A RESTful interface means clean URLs, less code, and a CRUD interface. CRUD means Create, Read, Update, Delete. In REST, two additional HTTP verbs are used: PUT and DELETE.

Q.Why Ruby On Rails?
Ans: There are a lot of advantages to using Ruby on Rails:
DRY Principle (Don't Repeat Yourself): a principle of software development aimed at reducing repetition of code. "Every piece of code must have a single, unambiguous representation within a system."
Convention over Configuration: most web development frameworks for .NET or Java force you to write pages of configuration code. If you follow the suggested naming conventions, Rails doesn't need much configuration.
Gems and Plugins: RubyGems is a package manager for the Ruby programming language that provides a standard format for distributing Ruby programs and libraries. A Rails plugin is either an extension or a modification of the core framework. It provides a way for developers to share bleeding-edge ideas without hurting the stable code base. We need to decide if our plugin will potentially be shared across different Rails applications.
Scaffolding: Scaffolding is a meta-programming method of building a database-backed software application. It is a technique supported by MVC frameworks, in which the programmer writes a specification that describes how the application database may be used. There are two types of scaffolding:
- static: static scaffolding takes two parameters, your controller name and model name.
- dynamic: in dynamic scaffolding you define the controller and model one by one.
Rake Support: Rake is a software task management tool. It allows you to specify tasks and describe dependencies, as well as to group tasks in a namespace.
Metaprogramming: metaprogramming techniques use programs to write programs.
Bundler: Bundler is a new concept introduced in Rails 3 that helps you manage the gems for your application. After specifying the Gemfile, you run bundle install.
REST support.
Action Mailer.

Q.What Do You Mean By render And redirect_to?
Ans: render causes Rails to generate a response whose content is provided by rendering one of your templates; it goes directly to the view page. redirect_to generates a response that, instead of delivering content to the browser, tells it to request another URL; the new request first goes through a controller action and then to the view page.

Q.What Is ORM In Rails?
Ans: ORM stands for Object-Relational Mapping, where classes are mapped to tables in the database and objects are directly mapped to rows in the table.

Q.How Many Types Of Association Relationships Does A Model Have?
Ans: When you have more than one model in your Rails application, you need to create connections between those models. You do this via associations. Active Record supports three types of associations:
one-to-one: a one-to-one relationship exists when one item has exactly one of another item. For example, a person has exactly one birthday or a dog has exactly one owner.
one-to-many: a one-to-many relationship exists when a single object can be a member of many other objects. For instance, one subject can have many books.
many-to-many: a many-to-many relationship exists when the first object is related to one or more of a second object, and the second object is related to one or more of the first object.
You indicate these associations by adding declarations to your models: has_one, has_many, belongs_to, and has_and_belongs_to_many.

Q.What Are Helpers And How To Use Helpers In RoR?
Ans: Helpers are modules that provide methods which are automatically usable in your views. They provide shortcuts to commonly used display code and a way for you to keep the programming out of your views. The purpose of a helper is to simplify the view.

Q.What Are Filters?
Ans: Filters are methods that run "before", "after" or "around" a controller action. Filters are inherited, so if you set a filter on ApplicationController, it will run on every controller in your application.

Q.What Is MVC? And How Does It Work?
Ans: MVC stands for Model-View-Controller, used by many languages like PHP, Perl, Python etc. The flow goes like this: the request first comes to the controller; the controller finds an appropriate view and interacts with the model; the model interacts with your database and sends the response to the controller; then the controller, based on the response, passes the output parameters to the view.

Q.What Are Sessions And Cookies?
Ans: A session is used to store user information on the server side (the default Rails cookie-based session store is limited to 4 KB). Cookies are used to store information on the browser, or client, side.

Q.What Is request.xhr?
Ans: request.xhr tells the controller that a new Ajax request has come in; it always returns a Boolean value (true or false).

Q.What Things Can We Define In The Model?
Ans: There are a lot of things you can define in models; a few are:
Validations (like validates_presence_of, validates_numericality_of, validates_format_of etc.)
Relationships (like has_one, has_many, HABTM etc.)
Callbacks (like before_save, after_save, before_create etc.)
Plugin settings: suppose you installed a plugin, say validation_group; you can also define validation_group settings in your model
SQL queries and Active Record association relationships

Q.How Many Types Of Callbacks Are Available In RoR?
Ans: before_validation, before_validation_on_create, validate_on_create, after_validation, after_validation_on_create, before_save, before_create, after_create, after_save.

Q.How To Serialize Data With YAML?
Ans: YAML is a straightforward, machine-parsable data serialization format designed for human readability and interaction with scripting languages such as Perl and Python. YAML is optimized for data serialization, formatted dumping, configuration files, log files, internet messaging and filtering.

Q.How To Use Two Databases In A Single Application?
Ans: magic_multi_connections allows you to write your models once and use them with multiple Rails databases at the same time: sudo gem install magic_multi_connection. After installing this gem, just add this line at the bottom of your environment.rb: require "magic_multi_connection"

Q.What Are The Various Changes Between Rails Version 2 And 3?
Ans:
- Introduction of Bundler (a new way to manage your gem dependencies)
- Gemfile and Gemfile.lock (where all your gem dependencies live, instead of environment.rb)
- HTML5 support

Q.What Are TDD And BDD?
Ans: TDD stands for Test-Driven Development and BDD stands for Behavior-Driven Development.

Q.What Are The Servers Supported By Ruby On Rails?
Ans: Rails ships with the WEBrick server by default, but it can also be run by:
Lighttpd (pronounced 'lighty'), an open-source web server optimized for speed-critical environments.
Abyss Web Server, a compact web server available for Windows, Mac OS X and Linux operating systems.
Apache and nginx.

Q.What Do You Mean By Naming Conventions In Rails?
Ans:
Variables: variables are named with all lowercase letters and words separated by underscores. E.g.: total, order_amount.
Classes and Modules: classes and modules use MixedCase and have no underscores; each word starts with an uppercase letter. E.g.: InvoiceItem.
Database Tables: table names have all lowercase letters and underscores between words, and all table names are plural. E.g.: invoice_items, orders.
Model: the model is named using the class naming convention of unbroken MixedCase and is always the singular of the table name. For example, if the table name is orders, the model name would be Order. Rails will then look for the class definition in a file called order.rb in the /app/models directory. If the model class name has multiple capitalized words, the table name is assumed to have underscores between these words.
Controller: controller class names are pluralized, so OrdersController would be the controller class for the orders table. Rails will then look for the class definition in a file called orders_controller.rb in the /app/controllers directory.

Q.Which Log Do You Check For An Error In Ruby On Rails?
Ans: Rails reports errors from Apache in log/apache.log and errors from the Ruby code in log/development.log. If you're having a problem, do have a look at what these logs are saying.

Q.How Do You Run Your Rails Application Without Creating Databases?
Ans: You can run your application by uncommenting this line in config/environment.rb: config.frameworks -= [ :active_record ]

Q.What Are The Different Components Of Rails?
Ans: The components used in Rails are as follows:
Action Controller: it is the component that manages the controllers and processes the incoming requests to the Rails application. It extracts the parameters and dispatches the response when an action is performed on the application.
It provides services like session management, template rendering and redirect management.
Action View: it manages the views of the Rails application and creates output in both HTML and XML format. It also provides management of the templates and the AJAX support used by the application.
Active Record: it provides the base platform for the models used in the Rails application. It provides database independence, CRUD functionality, search capability and the setting of relationships between different models.
Action Mailer: it is a framework that provides email services and a platform on which flexible templates can be implemented.

Q.What Is The Purpose Of load, autoload, And require_relative In Ruby?
Ans: load allows a file to be loaded into memory and processes the execution of the program contained in a separate file. It includes the classes, modules, methods and other files and executes them in the current scope. It performs the inclusion operation and reprocesses the whole code every time load is called. require is the same as load, but it loads the code only once, the first time.
autoload: registers a file so that the interpreter loads it the first time the associated constant is referenced.
require_relative: allows loading of local folders and files relative to the current file.

Q.What Is A Proc?
Ans: Everyone usually confuses procs with blocks, but a strong Rubyist can grok the true meaning of the question. Essentially, Procs are anonymous methods (or nameless functions) containing code. They can be placed inside a variable and passed around like any other object or scalar value. They are created by Proc.new, lambda, and blocks (invoked by the yield keyword). Blocks are very handy and syntactically simple; however, we may want to have many different blocks at our disposal and use them multiple times.
As such, passing the same block again and again would require us to repeat ourselves. However, as Ruby is fully object-oriented, this can be handled quite cleanly by saving reusable code as an object itself. This reusable code is called a Proc (short for procedure). The only difference between blocks and Procs is that a block is a Proc that cannot be saved, and as such, is a one-time-use solution.

Q.What Is Unit Testing (In Classical Terms)? What Is The Primary Technique When Writing A Test?
Ans: Unit testing, simply put, is testing methods, the smallest unit in object-oriented programming. Strong candidates will argue that it allows a developer to flesh out their API before it is consumed by other systems in the application. The primary way to achieve this is to assert that the actual result of the method matches an expected result.

Q.What Is The Difference Between nil And false In Ruby?
Ans: false is a Boolean value (the instance of FalseClass), while nil is not a Boolean; it is the singleton instance of NilClass (with object_id 4 on older MRI versions).

Q.What Are The Looping Structures Available In Ruby?
Ans: for..in, until..end, while..end, loop do..end. Note: you can also use each to iterate over an array, but it is an iterator rather than a loop in the strict sense.

Q.How Is The Visibility Of Methods Changed In Ruby (Encapsulation)?
Ans: By applying an access modifier: the public, private and protected access modifiers.

Q.Dynamic Finders?
Ans: For every field (also known as an attribute) you define in your table, Active Record provides a finder method. If you have a field called first_name on your Client model, for example, you get find_by_first_name and find_all_by_first_name for free from Active Record. If you have a locked field on the Client model, you also get find_by_locked and find_all_by_locked methods. You can also use find_last_by_* methods, which will find the last record matching your argument. You can specify an exclamation point (!)
on the end of the dynamic finders to get them to raise an ActiveRecord::RecordNotFound error if they do not return any records, like Client.find_by_name!("Ryan"). If you want to find both by name and locked, you can chain these finders together by simply typing "and" between the fields, for example Client.find_by_first_name_and_locked("Ryan", true).

Q.Finding By SQL?
Ans: If you'd like to use your own SQL to find records in a table, you can use find_by_sql. The find_by_sql method will return an array of objects even if the underlying query returns just a single record. For example, you could run this query:
Client.find_by_sql("SELECT * FROM clients INNER JOIN orders ON clients.id = orders.client_id ORDER BY clients.created_at DESC")
find_by_sql provides you with a simple way of making custom calls to the database and retrieving instantiated objects.

Q.Pluck?
Ans: pluck can be used to query a single column from the underlying table of a model. It accepts a column name as an argument and returns an array of values of the specified column with the corresponding data type.
Client.where(:active => true).pluck(:id) # SELECT id FROM clients WHERE active = 1
Client.uniq.pluck(:role) # SELECT DISTINCT role FROM clients

Q.Difference Between Rails 2 And Rails 3?
Ans: There are 7 major differences between Rails 2 and Rails 3: new Router API, new mailer, new Active Record query interface, asset pipeline, security improvements, unobtrusive JavaScript (UJS), and dependency management with Bundler.

Q.What Is A Ruby Singleton Method?
Ans: A method which belongs to a single object rather than to an entire class and other objects. Before explaining singleton methods, here is a small introduction to class methods.
Class method: when you write your own class methods, you do so by prefacing the method name with the name of the class. There are three ways to write a class method. The first way is to preface the method name with the class name (ClassMethods.method1).
The second way is to preface the method name with the self keyword (self.method2). The third way is to write a separate block inside the class which contains the methods (class << self ... end).
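The Proc/lambda distinction and the three class-method styles discussed above can be sketched concretely in plain, runnable Ruby (the Invoice class and method names here are invented for illustration):

```ruby
# Procs are anonymous, storable chunks of code; a block is a one-shot Proc.
double = Proc.new { |x| x * 2 }
triple = lambda { |x| x * 3 }
puts double.call(5)   # 10
puts triple.call(5)   # 15

# A method can capture its block as a Proc (&blk) and reuse it.
def twice(&blk)
  blk.call + blk.call
end
puts twice { 4 }      # 8

class Invoice
  # Style 1: class name prefix
  def Invoice.style_one; "one"; end
  # Style 2: self prefix
  def self.style_two; "two"; end
  # Style 3: class << self block
  class << self
    def style_three; "three"; end
  end
end

# A singleton method belongs to one object only, not to its class.
obj = Object.new
def obj.greet; "hello"; end
puts obj.greet        # hello
```

All three class-method styles behave identically from the caller's side (Invoice.style_one, Invoice.style_two, Invoice.style_three); class << self is simply the most convenient when defining many class methods at once.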
SCCM Interview Questions
Q.What is SCCM?
Ans: System Center Configuration Manager (CM16 or CM12 or ConfigMgr or Configuration Manager), formerly Systems Management Server (SMS), is a systems management software product by Microsoft for managing large groups of Windows-based computer systems. Configuration Manager provides remote control, patch management, software distribution, operating system deployment, network access protection, and hardware and software inventory.

Q.What is the SMS Provider and what does it do?
Ans: The SMS Provider is a WMI provider that allows both read and write access to the Configuration Manager 2016 site database. The SMS Provider is used by the Configuration Manager console, Resource Explorer, tools, and custom scripts used by Configuration Manager 2016 administrators to access site information stored in the site database. The SMS Provider also helps ensure that Configuration Manager 2016 object security is enforced by only returning site information that the user account running the Configuration Manager console is authorized to view.

Q.What is a PRIMARY SITE?
Ans: Manages clients in well-connected networks. Four main characteristics:
- The site has access to a Microsoft SQL Server database.
- It can administer or be administered via the Configuration Manager console.
- It can be a child of other primary sites and can have child sites of its own.
- Clients can be assigned directly to the site.

Q.What is a CENTRAL SITE?
Ans: A central site is a Configuration Manager primary site that resides at the top of the Configuration Manager hierarchy. All database information rolls from the child to the parent and is collected by the central site's Configuration Manager database. The central site can administer any site below it in the hierarchy and can send data down to those sites as well.

Q.What is a Secondary Site?
Ans: Four main characteristics:
- A secondary site does not have access to a Microsoft SQL database.
- Secondary sites are ALWAYS a child site of a primary site and can only be administered via a primary site.
- Secondary sites cannot have child sites of their own.
- Clients cannot be assigned directly to the site.

Q.Can you change a secondary site to a primary site?
Ans: No. A secondary site is always a secondary site. It cannot be upgraded, moved, or changed without deleting it and reinstalling it. If you delete and reinstall, you lose all secondary site data.

Q.How does SCCM download patches?
Ans: You need to add the Software Update Point site role to the site; configure the software update point as active; and configure the products, classifications, sync settings, etc. in the Software Update Point properties. Then you can go to the Update Repository node and run the Run Synchronization action from the central primary site. Once synchronization completes, you will see the metadata in the Configuration Manager console.

Q.Can you distribute a package to a computer without making it a member of a collection?
Ans: No. To distribute software you must have a package, a program and an advertisement. Advertisements can only be sent to collections, not to computers. If you want to distribute a package to a single computer, you must create a collection for that computer.

Q.Can a site have more than one default management point?
Ans: No. You can configure more than one management point in a site, but only one of those management points can be configured as the default management point to support intranet clients in the site. If you are concerned about performance, you can configure more than one management point, configure them to be part of a Network Load Balancing (NLB) cluster, and then configure the NLB cluster as the default management point.

Q.Can a secondary site have child sites?
Ans: No. A secondary site cannot have a primary or secondary site reporting to it.
Secondary sites are always child sites to a primary site.

Q.Can you install the Configuration Manager client components without discovering the computer first?
Ans: Yes. Client Push Installation is the only client installation method that requires clients to be discovered first.

Q.Does Configuration Manager 2016 mixed mode require a public key infrastructure (PKI)?
Ans: No. Configuration Manager 2016 native mode requires a PKI, but mixed mode does not. PKI authentication helps provide a greater level of security, but Configuration Manager 2016 does not help you install or configure the PKI infrastructure. If you do not already have the expertise to install and configure the PKI infrastructure, you can start with mixed mode and then change to native mode later.

Q.Can computers show up in the Configuration Manager console before they have the Configuration Manager client installed?
Ans: Yes. If you use a discovery method, Configuration Manager can find many resources and create data discovery records (DDRs) for them, and those DDRs are stored in the database. However, you cannot use Configuration Manager features such as software distribution, software updates management, and inventory until you install the client components.

Q.How do you back up the SCCM server?
Ans: To create a scheduled backup task, expand the Site Settings node, expand the Site Maintenance node, and click on Tasks. For a manual backup, start the SMS_SITE_BACKUP service.

Q.What are the client deployment methods?
Ans: Client Push Installation, software update point based installation, Group Policy installation, logon script installation, manual installation, upgrade installation (software distribution).

Q.What is a SUP (Software Update Point)?
Ans: This is a required component of software updates; after it is installed, the SUP is displayed as a site system role in the Configuration Manager console.
The software update point site system role must be created on a site system server that has Windows Server Update Services (WSUS) 3.0 installed.

Q.What is ITMU?
Ans: The SMS 2003 Inventory Tool for Microsoft Updates.

Q.What are the prerequisites for a Software Update Point?
Ans:
- Windows Server Update Services (WSUS) 3.0
- WSUS 3.0 Administration Console
- Windows Update Agent (WUA) 3.0
- Site server communication to the active software update point
- Network Load Balancing (NLB)
- Background Intelligent Transfer Service (BITS) 2.5
- Windows Installer

Q.What is the SMS Provider?
Ans: The SMS Provider is a WMI provider that allows both read and write access to the Configuration Manager 2016 site database. The SMS Provider is used by the Configuration Manager console. The SMS Provider can be installed on the site database server computer, the site server computer, or another server-class third computer during Configuration Manager 2016 Setup. After setup has completed, the current installed location of the SMS Provider is displayed on the site properties General tab.

Q.Can you assign clients to a secondary site?
Ans: No. If you have a secondary site, the client must be assigned to the primary parent of the secondary site. However, Configuration Manager knows how to manage clients at the child secondary site. If there is a distribution point at the secondary site that has the content the clients need, the clients will probably get the content from the local distribution point instead of crossing the WAN link to the primary site.

Q.Can Configuration Manager 2016 be used to package software for distribution?
Ans: No. Configuration Manager 2016 delivers command lines to clients and can force those command lines to run with administrative rights using the Local System account.
Configuration Manager 2016 command lines can be batch files, scripts, Windows Installer files with .msi extensions, or executable files: any file that the operating system can run, Configuration Manager 2016 can distribute. However, Configuration Manager 2016 does not actually package any software for distribution. Contact us for more on SCCM Online Training.
Windows Admin Interview Questions
Q.What is the purpose of having AD?
Answer: Active Directory is a directory service that identifies all resources on a network and makes that information available to users and services. The main purpose of AD is to control and authenticate network resources.

Q.Explain the sysvol folder.
Answer: The sysvol folder stores the server's copy of the domain's public files. The contents of the sysvol folder, such as group policy, users, and groups, are replicated to all domain controllers in the domain. The sysvol folder must be located on an NTFS volume.

Q.Differentiate between NTFS & FAT.
Answer: NTFS is the current file system used by Windows. It offers features like security permissions (to limit other users' access to folders), quotas (so one user can't fill up the disk), shadowing (backing up) and many other features that help Windows. FAT32 is the older Microsoft file system, primarily used by the Windows 9x line; Windows could be installed on a FAT32 partition up to XP. In comparison, FAT32 offers none of what was mentioned above and also has a maximum file (not folder) size of 4 GB, which is quite small these days, especially for HD video.

Q.Explain the functions of Active Directory.
Answer: AD enables centralization in a domain environment. The main purpose of AD is to control and authenticate network resources.

Q.What is the name of the AD database?
Answer: The AD database is NTDS.DIT.

Q.What is loopback?
Answer: The loopback address is 127.0.0.1, an address that sends outgoing signals back to the same computer for testing.

Q.What is a proxy server?
Answer: A proxy server is a computer that acts as a gateway between a local network (e.g., all the computers at one company or in one building) and a larger-scale network such as the Internet. Proxy servers provide increased performance and security. In some cases, they monitor employees' use of outside resources.

Q.Differentiate between FIREWALL/ANTIVIRUS.
Answer: Antivirus: the prime job of an antivirus is to protect your system from computer viruses. Whether your computer is standalone, part of a network, or connected to the Internet, you need an antivirus program. It actively monitors your system for virus threats from different sources while you use it; if it finds one, it tries to clean or quarantine the virus, ultimately keeping your system and data safe.
Firewall: a firewall, on the other hand, is a program which protects your system from outsider/intruder/hacker attacks. These attacks may not be virus-type. In some cases hackers can take control of your system remotely and steal your data or important information from the system. If your system is directly connected to the Internet or a large network, you can install a software firewall on your PC to protect yourself from unauthorized access. A firewall is available in either software or hardware form. For a single PC you may need a software firewall, while a large corporation implements hardware firewalls to protect all of their systems from such attacks.

Q.Differentiate between front-end & back-end servers.
Answer: A back-end server is a computer resource that has not been exposed to the internet; it does not directly interact with the internet user. It can also be described as a server whose main function is to store and retrieve email messages. A front-end server is a computer resource that is exposed to the internet.

Q.What is APIPA?
Answer: APIPA stands for Automatic Private IP Addressing. APIPA is a DHCP failover mechanism for local networks. With APIPA, DHCP clients can obtain IP addresses when DHCP servers are non-functional. APIPA exists in all modern versions of Windows except Windows NT. When a DHCP server fails, APIPA allocates IP addresses in the private range 169.254.0.1 to 169.254.255.254.

Q.How do you release and renew an IP address from the command prompt?
Answer: ipconfig /release and ipconfig /renew

Q.What is a WINS server?
Answer: Windows Internet Name Service (WINS) servers dynamically map IP addresses to computer names (NetBIOS names). This allows users to access resources by computer name instead of by IP address. If you want a computer to keep track of the names and IP addresses of other computers in your network, configure it as a WINS server. If you do not use WINS in such a network, you cannot connect to a remote network resource by using its NetBIOS name.

Q.What is the Windows Registry?
Answer: The Windows Registry, usually referred to as "the registry," is a collection of databases of configuration settings in Microsoft Windows operating systems.

Q.What is the System Volume Information (SVI) folder?
Answer: Windows XP includes a folder named System Volume Information on the root of each drive that remains hidden from view even when you choose to show system files. It remains hidden because it is not a normally hidden folder; you could say it is a super-hidden folder. Windows does not show super-hidden folders even when you select "Show Hidden Files."

Q.What is the MBR?
Answer: Short for Master Boot Record, a small program that is executed when a computer boots up. Typically, the MBR resides on the first sector of the hard disk. The program begins the boot process by looking up the partition table to determine which partition to use for booting.

Q.What is BitLocker?
Answer: BitLocker is an encryption feature available in the Ultimate and Enterprise versions of Windows 7 and Vista. To encrypt an entire drive, simply right-click on the drive and select Turn on BitLocker from the context menu.

Q.Difference between SATA and IDE.
Answer: IDE and SATA are different types of interfaces to connect storage devices (like hard drives) to a computer's system bus. SATA stands for Serial Advanced Technology Attachment (or Serial ATA) and IDE is also called Parallel ATA or PATA.
SATA is the newer standard and SATA drives are faster than PATA (IDE) drives. For many years ATA provided the most common and the least expensive interface for this application, but by the beginning of 2007, SATA had largely replaced IDE in all new systems.

Q.Main differences between Windows Server 2008 and 2012.
Answer:
1). New Server Manager: create and manage server groups.
2). Hyper-V Replication: the Hyper-V Replica feature allows you to replicate a virtual machine from one location to another with Hyper-V and a network connection, without any shared storage required. This is a big deal in the Microsoft world for disaster recovery, high availability and more. VMware does this too, but the vendor charges new licensees extra for the capability.
3). Expanded PowerShell capabilities.
4). IIS 8.0 (versus IIS 7 in 2008).
5). Hyper-V 3.0.
6). PowerShell 3.0.
Get a practical explanation of Windows Server at Windows Server Online Training.

Q.How long has my computer been running? Get to know my computer's uptime.
Answer: Method 1: start Task Manager and select the Performance tab; there we can see the system uptime. Method 2: by typing systeminfo at the command prompt, we can find the server's uptime under System Boot Time.

Q.Event Viewer in Windows Server?
Answer: Control Panel > Administrative Tools > Computer Management > Event Viewer. Three types of events: Error, Warning, Information.

Q.Manage multiple, remote servers with Server Manager.
Answer: Server Manager is a management console in Windows Server 2012 R2 Preview and Windows Server 2012 that helps IT professionals provision and manage both local and remote Windows-based servers from their desktops, without requiring either physical access to servers or the need to enable Remote Desktop Protocol (RDP) connections to each server.
Although Server Manager is available in Windows Server 2008 R2 and Windows Server 2008, it was updated in Windows Server 2012 to support remote, multi-server management and to help increase the number of servers an administrator can manage.

Q.What happens when you type a URL in a browser?
Answer: First the computer looks up the destination host. If it exists in the local DNS cache, that information is used; otherwise, DNS querying is performed until the IP address is found. Then the browser opens a TCP connection to the destination host and sends the request according to HTTP 1.1 (it might use HTTP 1.0, but normal browsers don't do that any more). The server looks up the required resource (if it exists) and responds using the HTTP protocol, sending the data to the client (your browser). The browser then uses an HTML parser to re-create the document structure, which is then presented to you on screen. If it finds references to external resources, such as pictures, CSS files, or JavaScript files, these are delivered the same way as the HTML document itself.

Q.How does DHCP work?
Answer: DHCP stands for Dynamic Host Configuration Protocol. DHCP is a protocol used for automatic configuration of IP addresses on client computers connected to IP networks. DHCP operates on a client-server model in four phases:
Discover: A client broadcasts a DHCP Discover message when it comes alive on the network.
Offer: When a DHCP server receives the DHCP Discover message from the client, it reserves an IP address for the client and sends a DHCP Offer message to the client offering the reserved IP address.
Request: The client receives the DHCP Offer message and broadcasts a DHCP Request message to show its consent to accept the offered IP address.
Acknowledge: When the DHCP server receives the DHCP Request message from the client, it sends a DHCP Ack packet to the client. At this point the IP configuration process is complete.

Q.What is a DHCP scope?
Answer: A range of IP addresses that the DHCP server can assign to clients on one subnet.

Q.What protocol and ports does DHCP use?
Answer: UDP; the server listens on port 67 and the client on port 68.

Q.What is a DHCP lease?
Answer: A DHCP lease is the amount of time for which the DHCP server grants the DHCP client permission to use a particular IP address. A typical server allows its administrator to set the lease time.

Q.Can DHCP support statically defined addresses?
Answer: Yes.

Q.Define the DORA process and why it is used.
Answer: Discover, Offer, Request, and Acknowledgement. It is used to assign IP addresses automatically to client systems.

Q.What is authorizing DHCP servers in Active Directory?
Answer: If a DHCP server is to operate within an Active Directory domain (and is not running on a domain controller), it must first be authorized in Active Directory.

Q.How do you back up and restore DHCP in Windows Server 2008?
Answer: In Windows Server 2008, backup of the DHCP database and settings has gotten simpler. You may want to back up your DHCP server from time to time to prepare for disaster recovery scenarios or when migrating the DHCP server role to new hardware.
Backup DHCP Server:
1) Open Server Manager > DHCP role.
2) Right-click the server name and choose Backup.
3) Choose a location for the backup and click OK.
Restore DHCP Server:
1) Open Server Manager > DHCP role.
2) Right-click the server name and choose Restore.
3) Choose the location of the backup and click OK.
4) Restart the DHCP service.
DHCP database location: C:\WINDOWS\System32\DHCP

Q.Define DNS.
Answer: Domain Name System. DNS is an Internet service that translates domain names into IP addresses. Because domain names are alphabetic, they're easier to remember. There are two types of lookup in DNS:
Forward lookup: converts a domain name to an IP address.
Reverse lookup: converts an IP address to a domain name.
There are three types of zone: primary, secondary, and stub.
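The two lookup directions can be tried from any machine with Python's standard socket module; a minimal sketch (the printed result depends on the local hosts file and resolver):

```python
import socket

def forward_lookup(name):
    """Forward lookup: domain name -> IPv4 address."""
    return socket.gethostbyname(name)

def reverse_lookup(ip):
    """Reverse lookup: IP address -> domain name (raises if no PTR record)."""
    host, _aliases, _addresses = socket.gethostbyaddr(ip)
    return host

# 'localhost' resolves locally, so this works even without a DNS server;
# it usually prints 127.0.0.1.
print(forward_lookup("localhost"))
```

The same pair of operations is what nslookup performs when given a name or an address on the command line.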
Q.What is the port number of DNS?
Answer: UDP, port 53.

Q.What is nslookup?
Answer: Nslookup.exe is a command-line administrative tool for testing and troubleshooting DNS servers. This tool is installed along with the TCP/IP protocol through Control Panel. It is a command-line utility that enables a user to look up the IP address of a domain or host on a network.

Q.What is LDAP and why is it used?
Answer: LDAP is the Lightweight Directory Access Protocol. It is the protocol used to access Active Directory; basically, it's a protocol used to access data from a directory database.

Q.What is Active Directory and why is it used?
Answer: Active Directory is a directory service created by Microsoft. It is included with most Windows Server operating systems. Active Directory is primarily used to store directory objects such as users, groups, computers, and printers. Using Active Directory brings a number of advantages to your network: centralized user account management, centralized policy management (Group Policy), and better security management.

Q.What is Group Policy?
Answer: Group Policy is a feature of the Microsoft Windows NT family of operating systems that controls the working environment of user accounts and computer accounts. Group Policy provides centralized management and configuration of operating systems, applications, and users' settings in an Active Directory environment.

Q.What is the order in which GPOs are applied?
Answer: Local Group Policy object, site, domain, and organizational units.

Q.What is the difference between software publishing and assigning?
Answer:
Assign to users: The software application is advertised when the user logs on. It is installed when the user clicks on the software application icon via the Start menu, or accesses a file that has been associated with the software application.
Assign to computers: The software application is advertised and installed when it is safe to do so, such as when the computer is next restarted.
Publish to users: The software application does not appear on the Start menu or desktop, so the user may not know that the software is available. The application is made available via the Add/Remove Programs option in Control Panel, or by clicking on a file that has been associated with the application. Published applications do not reinstall themselves in the event of accidental deletion, and it is not possible to publish to computers.

Q.Can I deploy non-MSI software with a GPO?
Answer: Yes, by creating a file with the .zap extension.

Q.Name some GPO settings in the computer and user parts.
Answer: Computer Configuration and User Configuration.

Q.Name a few benefits of using the GPMC.
Answer: Easy administration of all GPOs across the entire Active Directory forest; a view of all GPOs in one single list; backup and restore of GPOs; migration of GPOs across different domains and forests.

Q.How frequently is the client policy refreshed?
Answer: Every 90 minutes, give or take.

Q.Where are group policies stored?
Answer: C:\Windows\System32\GroupPolicy.

Q.How do you back up Group Policy?
Answer: To back up a single GPO, right-click the GPO, and then click Back Up. To back up all GPOs in the domain, right-click Group Policy Objects and click Back Up All.

Q.Define DSRM mode.
Answer: Directory Services Restore Mode (DSRM) is a special boot mode for repairing or recovering Active Directory. It is used to log on to the computer when Active Directory has failed or needs to be restored. To manually boot in Directory Services Restore Mode, press the F8 key repeatedly, immediately after the BIOS POST screen and before the Windows logo appears. (Timing can be tricky; if the Windows logo appears, you waited too long.) A text menu will appear.
Use the up/down arrow keys to select Directory Services Restore Mode (DS Restore Mode), then press Enter.

Q.Where is the AD database held? What other folders are related to AD?
Answer: The AD database is stored in C:\Windows\NTDS\NTDS.DIT.

Q.Have you ever installed AD? Describe the steps.
Answer: To install Microsoft Active Directory:
Ensure that you log on to the computer with an administrator account to perform the installation.
Click Start > Administrative Tools > Server Manager > Configure Your Server.
On the Welcome page, click Next.
On the Operating System Compatibility panel, click Next.
On the Domain Controller Type panel, select "Domain controller for a new domain" and click Next.
On the Create New Domain panel, select "Domain in a new forest" and click Next.
On the New Domain Name panel, enter the DNS suffix for your new Active Directory. This name will be used during Tivoli Provisioning Manager installation, so make a note of it. Click Next.
On the NetBIOS Domain Name panel, enter the NetBIOS name of the domain. The first part of the DNS name is usually sufficient. Click Next.
On the Database and Logs panel, select the desired folders for the database and logs. C:\Windows\NTDS is the default. Click Next.
On the Shared System Volume panel, enter a valid directory for the system volume. C:\Windows\Sysvol is the default. Click Next to continue.
If you configured DNS successfully, the Permissions Setting panel is displayed. Select "Permissions compatible only with Windows 2000 or Windows Server 2003". Click Next.
On the Directory Services Restore Mode Administrator Password panel, enter a valid password to be used when running Directory Services in Restore Mode. Click Next.
Verify the settings and click Next to begin the Active Directory configuration. The server will be rebooted as part of the process.

Q.What is the use of the SYSVOL folder?
Answer: Security-related Active Directory information, such as Group Policy data and scripts, is stored in the SYSVOL folder, which can only be created on an NTFS partition.
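The GPO application order described earlier (local, site, domain, then organizational units) means that a setting applied later overrides the same setting applied earlier. A minimal sketch of that merge logic, with made-up GPO names and setting keys purely for illustration:

```python
# GPO precedence sketch: policies are applied in LSDOU order (Local, Site,
# Domain, OU); on a conflict, the later policy wins. All names are made up.

def effective_policy(gpos_in_order):
    """Merge GPO setting dicts; later GPOs override earlier ones."""
    result = {}
    for gpo in gpos_in_order:
        result.update(gpo["settings"])
    return result

lsdou = [
    {"name": "Local GPO",  "settings": {"wallpaper": "default", "usb": "allow"}},
    {"name": "Site GPO",   "settings": {"proxy": "site-proxy"}},
    {"name": "Domain GPO", "settings": {"wallpaper": "corporate"}},
    {"name": "OU GPO",     "settings": {"usb": "deny"}},
]

# wallpaper comes from the Domain GPO, usb from the OU GPO.
print(effective_policy(lsdou))
```

The same "last writer wins" rule is why an OU-linked GPO can lock down a setting that a domain-level GPO left open.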
Q.What is the global catalog?
Answer: The Global Catalog is a database that contains information pertaining to objects within all domains in the Active Directory environment.

Q.What is the difference between local, global, and universal groups?
Answer: Domain local groups assign access permissions to global domain groups for local domain resources. Global groups provide access to resources in other trusted domains. Universal groups grant access to resources in all trusted domains.

Q.What is group nesting?
Answer: Adding one group as a member of another group is called group nesting. It helps with easier administration and reduced replication traffic.

Q.What is a domain controller?
Answer: A domain controller (DC) is a server that handles all the security requests from other computers and servers within the Windows Server domain. Historically there was a primary domain controller (PDC) and a backup domain controller (BDC). The primary DC focused on domain services only, to avoid the possibility of a system slowdown or crash due to overtasking from managing other functionality and security requests. In the event of the primary DC going down, a backup DC could be promoted to primary to keep the rest of the server systems functioning correctly.

Q.What is a domain?
Answer: A domain is a set of network resources (applications, printers, and so forth) for a group of users. The user needs only to log in to the domain to gain access to the resources, which may be located on a number of different servers in the network. The domain name is simply an address for this group of resources, not to be confused with a URL.

Q.What is a forest?
Answer: A forest is a collection of one or more Active Directory domains that share a common schema, configuration, and global catalog.

Q.What is the global catalog server?
Answer: The Active Directory Global Catalog is the central store of information about objects in an Active Directory forest.
A Global Catalog is created automatically on the first domain controller in the first domain in the forest. The domain controller hosting the Global Catalog is known as a global catalog server.

Q.What is a tree?
Answer: An Active Directory tree is a collection of Active Directory domains that begins at a single root and branches out into peripheral, child domains. Domains in an Active Directory tree share the same namespace. An Active Directory forest is a collection of Active Directory trees, similar to a real-world forest.

Q.What is a site?
Answer: A Site object in Active Directory represents a geographic location that hosts networks.

Q.What are the Flexible Single Master Operation (FSMO) roles?
Answer: There are five FSMO roles:
Schema Master: forest level, one per forest.
Domain Naming Master: forest level, one per forest.
PDC Emulator: domain level, one per domain.
RID Master: domain level, one per domain.
Infrastructure Master: domain level, one per domain.

Q.What is the command to add a client to a domain?
Answer: NETDOM /Domain:MYDOMAIN /user:adminuser /password:apassword MEMBER MYCOMPUTER /JOINDOMAIN

Q.How do you set file permissions on a folder using Group Policy?
Answer: The setting is located under Computer Configuration > Windows Settings > Security Settings > File System. The procedure:
1) Go to the location in Group Policy listed above.
2) Right-click File System.
3) Click Add File.
4) In the "Add a file or folder" window, select the folder (or file) for which you want the permissions to be set, and click OK.
5) In the security box that pops up, add a user or group that needs permission to the folder.

Q.Define virtualization.
Answer: Hyper-V virtualization provides an environment in which we can run multiple operating systems at the same time on one physical computer, by running each operating system in its own virtual machine.

Q.What are the benefits of virtualization?
Answer: It reduces the number of physical servers and the infrastructure needed for your data center.

Q.What is a hypervisor?
Answer: You can think of a hypervisor as the kernel or core of a virtualization platform. The hypervisor is also called the Virtual Machine Monitor. The hypervisor has access to the physical host hardware.

Q.What are a host, a guest, and a virtual machine?
Answer: A host system (host operating system) is the primary, first-installed operating system. If you are using a bare-metal virtualization platform like Hyper-V or ESX, there really isn't a host operating system besides the hypervisor. If you are using a Type-2 hypervisor like VMware Server or Virtual Server, the host operating system is whatever operating system those applications are installed on. A guest system (guest operating system) is a virtual guest or virtual machine (VM) that is installed under the host operating system. The guests are the VMs that you run in your virtualization platform. Some admins also call the host and guest the parent and child.

Q.How do you create a Hyper-V snapshot?
Answer: Select the virtual machine in Hyper-V Manager and select Snapshot from the Actions pane. The status of the virtual machine will change to "Taking Snapshot" and show the progress of the action as a percentage. Snapshot files use the .avhd extension.

Q.Which files make up a virtual machine?
Answer: The first thing to know is what files are used to create a virtual machine:
.XML files: contain the virtual machine configuration details. There is one for each virtual machine and each snapshot of a virtual machine. They are always named with the GUID used internally to identify the virtual machine or snapshot in question.
.BIN files: contain the memory of a virtual machine or snapshot that is in a saved state.
.VSV files: contain the saved state of the devices associated with the virtual machine.
.VHD files: the virtual hard disk files for the virtual machine.
.AVHD files: the differencing disk files used for virtual machine snapshots.
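As a quick reference, the Hyper-V file roles listed above can be captured in a small lookup table; a minimal sketch (the descriptions simply restate the list, and the filename is made up):

```python
import os

# Hyper-V virtual machine file types, summarizing the list above.
HYPERV_FILE_TYPES = {
    ".xml":  "virtual machine / snapshot configuration (named by GUID)",
    ".bin":  "memory of a VM or snapshot in a saved state",
    ".vsv":  "saved state of the devices associated with the VM",
    ".vhd":  "virtual hard disk",
    ".avhd": "differencing disk used for snapshots",
}

def describe(filename):
    """Return the role of a Hyper-V file based on its extension."""
    ext = os.path.splitext(filename)[1].lower()
    return HYPERV_FILE_TYPES.get(ext, "unknown file type")

print(describe("server01.avhd"))  # differencing disk used for snapshots
```

A table like this is handy when cleaning up a VM folder by hand: it makes clear which files are the disks themselves and which only hold configuration or saved state.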