Differences Between Blue Prism and Automation Anywhere

Introduction to Blue Prism

Blue Prism is a well-known tool used for robotic process automation (RPA).

Blue Prism brings a vast amount of integration experience and technology to its platform, and the technologies it uses are secure and robust. It provides a technology adaptor for each technology employed at the presentation layer: Java, Windows, mainframe green screens, Citrix, and the web.

Security and auditability are built into the Blue Prism robotic process automation platform at numerous levels. Permissions to create, design, run, and edit processes are assigned to users, and each authorized user has access to specific business objects.

Why should we select Blue Prism?

Blue Prism can be selected as an RPA tool for the following reasons.

  • With the help of Blue Prism, you can design automation processes within IT governance.
  • Blue Prism supports both internal and external encryption and decryption keys.
  • Blue Prism provides its users with audit logging.
  • Within process automation, you are provided with custom .NET code that offers a high degree of robustness.


Automation Anywhere

Automation Anywhere is a software company that develops robotic process automation tools. The company's product, also called Automation Anywhere, helps organizations seeking to deploy a digital workforce made up of software bots that complete end-to-end business processes. Automation Anywhere Enterprise combines traditional robotic process automation with cognitive elements such as natural language processing and the reading of unstructured data.

Automation Anywhere helps save time when performing quality-control procedures on IT, web applications, Windows and system administration, and other business-related processes.

Automated testing of software

Testing software can quickly become a tiresome job when you have to repeat every task again and again, as happens in bug isolation, regression testing, and logging. An automated testing tool solves this problem by saving time and effort while providing the same level of analysis. Automation Anywhere was designed to work at a good level of analysis and to handle problems related to bug isolation, logging, and regression testing.
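As a rough sketch of what such automation looks like in practice, the snippet below re-runs the same regression checks automatically. The `tax_total` function is hypothetical, standing in for the system under test; it is not part of Automation Anywhere.

```python
import unittest

def tax_total(amount, rate=0.2):
    """Hypothetical function under test: price plus sales tax."""
    return round(amount * (1 + rate), 2)

class RegressionTests(unittest.TestCase):
    """Checks that, once passing, must keep passing on every build."""

    def test_basic_total(self):
        self.assertEqual(tax_total(100), 120.0)

    def test_zero_amount(self):
        # Guards a previously isolated bug: zero input must not fail.
        self.assertEqual(tax_total(0), 0.0)

# Run the suite without exiting the interpreter.
suite = unittest.TestLoader().loadTestsFromTestCase(RegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Re-running such a suite after every change is what turns tedious manual regression testing into a repeatable, automated step.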

Recorders for Automation Anywhere

The recorder for Windows applications

The recorder used for Windows applications is known as the basic recorder and is the simplest of the lot. With it, you can record, save, and then run testing actions. The actions performed by the user are recorded on the basis of control coordinates relative to the window.

Web recorder

For applications on the web platform, there is a cross-browser recorder known as the web recorder. With its help, the user can see the actions related to each control displayed as they are being recorded.

Object recorder

Compared with the web recorder and the basic recorder, the object recorder is much more advanced. Recording with the object recorder is based on the attributes of the controls instead of coordinates relative to the window. Features such as the caption, index, and other attributes are used to identify elements. Thanks to this enhancement, the replay of test cases is more reliable, since the operation is based on capturing the controls themselves rather than on their location on the page.
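A simplified illustration of the difference, using hypothetical control data (the field names `caption`, `index`, and `type` are illustrative, not Automation Anywhere's internal format): coordinate-based lookup breaks when the window layout shifts, while attribute-based lookup still finds the control.

```python
# Hypothetical UI controls as a recorder might capture them.
controls = [
    {"caption": "OK",     "index": 0, "type": "button", "x": 40,  "y": 200},
    {"caption": "Cancel", "index": 1, "type": "button", "x": 120, "y": 200},
]

def find_by_coordinates(ctrls, x, y):
    """Basic-recorder style: position relative to the window."""
    return next((c for c in ctrls if (c["x"], c["y"]) == (x, y)), None)

def find_by_attributes(ctrls, caption, ctrl_type):
    """Object-recorder style: match stable attributes instead."""
    return next((c for c in ctrls
                 if c["caption"] == caption and c["type"] == ctrl_type), None)

# After a window resize the coordinates shift, but attributes survive.
resized = [dict(c, x=c["x"] + 15, y=c["y"] + 30) for c in controls]

assert find_by_coordinates(resized, 40, 200) is None           # replay breaks
assert find_by_attributes(resized, "OK", "button") is not None  # still found
```

This is why attribute-based replay is more robust: the identifying features travel with the control, not with the pixel grid.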

Automation Anywhere task editor

With the task editor provided by Automation Anywhere, you can alter, break down, and enhance recorded tasks. The features of the task editor are listed here.

1. Keystrokes and mouse
2. Windows and program files
3. Logging to file
4. Management of window controls
5. System
6. Loops and conditions
7. Delays and pauses
8. Image recognition
9. Extraction of web data from both structured and unstructured sources
10. Finding broken links
11. Actions related to FTPS
12. Script integrations and other tasks
13. Excel and databases
14. Variables
15. And lastly, error handling
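Several of these building blocks (loops, conditions, delays, and error handling) can be pictured together as an ordinary script. This is only a conceptual sketch of how such blocks combine, not Automation Anywhere's actual task format.

```python
import time

def run_task(rows, max_retries=2):
    """Conceptual task: loop over records, retry on error, log results."""
    log = []
    for row in rows:                       # loop block
        for attempt in range(max_retries + 1):
            try:
                if not row.strip():        # condition block
                    raise ValueError("empty row")
                log.append(f"processed: {row}")
                break
            except ValueError as err:      # error-handling block
                if attempt == max_retries:
                    log.append(f"failed: {err}")
                else:
                    time.sleep(0)          # delay block (0s for the sketch)
    return log

print(run_task(["alpha", "", "beta"]))
```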

Automation Anywhere is a very useful tool with many features that are also easy to learn. To build test cases gradually, the user can drag and drop items from the toolbox. Automation Anywhere lets you generate straightforward tasks, and it gives users the opportunity to share automated tasks through a central repository.


Comparison between Automation Anywhere and Blue Prism

The first point to consider when comparing Automation Anywhere and Blue Prism is how they are learned. With Blue Prism, the user should have knowledge of programming languages, the ability to produce business objects, and the ability to manage those objects in the control room. Automation Anywhere, by contrast, is developer friendly, and basic developer skills are enough.

When it comes to reusability, Blue Prism users can reuse objects across multiple processes, provided those business objects exist in Blue Prism's library. Automation Anywhere offers a product feature known as the smart adapter, with which the user can create unique, reusable automation blocks.

The third point of comparison is cognitive capability. Blue Prism's cognitive capability is low, whereas Automation Anywhere's is medium.

When it comes to accuracy, Blue Prism provides its users with web, desktop, and Citrix automation. Automation Anywhere, on the other hand, provides reasonable accuracy across all mediums.

Automation Anywhere provides users with both back-office and front-office robots, but Blue Prism offers only back-office automation, also known as batch or unattended automation.

From the operational scalability point of view, Automation Anywhere has limited capacity to handle large-scale robot deployments. Blue Prism has a high execution speed and very good capability for handling robot deployment at large scale. In Blue Prism you can view every piece of data assigned at runtime, and you can step into, out of, and over components easily, because Blue Prism offers superior debugging.

Next, the comparison on the basis of recorders, or macro readers. Automation Anywhere has a faster mapping process; recording actions is optional, and recordings can be tweaked. The user has the options of web record, screen record, and smart record. Blue Prism does not offer recorders or macro readers: because business objects can be scaled faster and more easily, Blue Prism uses business objects instead of reusable scripts.

Automation Anywhere has a client-server architecture, and Blue Prism also has a client-server architecture.

Both Blue Prism and Automation Anywhere offer accessibility that is application based only.

The process design of Automation Anywhere is script based. Blue Prism, on the other hand, has a visual process designer with complete control, and its approach is straightforward. Implementation speed is very high, and it can be controlled with the help of browser-based process automation anchoring.

The base technology of Automation Anywhere is Microsoft technology; the base technology of Blue Prism is C#.

Automation Anywhere has a high rate of reliability, and Blue Prism's reliability is also very high.

When it comes to pricing, Automation Anywhere has a high deployment cost. Blue Prism has a higher acquisition cost and provides limited training, available through business objects only. Its training cost is also high, at almost twice that of UiPath.

The last point of comparison is education and certification. Automation Anywhere's certification program was launched only recently. Blue Prism, in contrast, offers three certifications, namely service provider, delivery provider, and capability provider, and each of these certifications has silver and gold levels. Also, Blue Prism documentation is not available on open forums.

Conclusion

Based on the comparisons above, it is easier to select the type of RPA tool that will be good for your organization and deliver a higher rate of customer satisfaction. So it is advisable to make a thorough study before making a decision that will be advantageous for your company.

Agile Tutorials

Welcome to the Agile Tutorials. The objective of these tutorials is to gain an in-depth understanding of Agile.

In addition to these tutorials, we will also cover common issues, interview questions, and how-tos of Agile.

Introduction

Agile Project Management is one of the revolutionary methods introduced for the practice of project management. This is one of the latest project management strategies that is mainly applied to project management practice in software development. Therefore, it is best to relate agile project management to the software development process when understanding it.

From the inception of software development as a business, a number of processes have been followed, such as the waterfall model. With the advancement of software development technologies and business requirements, the traditional models are no longer robust enough to cater to the demands.

Therefore, more flexible software development models were required in order to address the agility of the requirements. As a result of this, the information technology community developed agile software development models.

'Agile' is an umbrella term used to identify various models used for agile development, such as Scrum. Since the agile development model is different from conventional models, agile project management is a specialized area within project management.

The General Principles Of The Agile Method

-Satisfy the client and continually develop software.

-Changing requirements are embraced for the client’s competitive advantage.

-Concentrate on delivering working software frequently. Delivery preference will be placed on the shortest possible time span.

-Developers and business people must work together throughout the entire project.

-Projects must be based on people who are motivated. Give them the proper environment and the support that they need. They should be trusted to get their jobs done.

-Face-to-face communication is the best way to transfer information to and from a team.

-Working software is the primary measurement of progress.

-Agile processes will promote development that is sustainable. Sponsors, developers, and users should be able to maintain an indefinite, constant pace.

-Constant attention to technical excellence and good design will enhance agility.

-Simplicity is considered to be the art of maximizing the work that is not done, and it is essential.

-Self-organized teams usually create the best designs.

-At regular intervals, the team will reflect on how to become more effective, and they will tune and adjust their behavior accordingly.

Advantages of Agile

The Agile Method grew out of the experience with the real-life projects of leading software professionals from the past. Because of this, the challenges and limitations of traditional development have been discarded. Subsequently, the Agile Method has been accepted by the industry as a better solution to project development. Nearly every software developer has used the Agile Method in some form.

This method offers a light framework for assisting teams. It helps them function and maintain focus on rapid delivery. This focus assists capable organizations in reducing the overall risks associated with software development.

The Agile Method ensures that value is optimized throughout the development process. The use of iterative planning and feedback results in teams that can continuously align a delivered product with the desired needs of a client. It easily adapts to changing requirements throughout the process by measuring and evaluating the status of a project. The measuring and evaluating allows accurate and early visibility into the progress of each project.

It could be stated that the Agile Method helps companies build the right product. Instead of trying to market software before it is written, the Agile Method empowers teams to optimize the release during its development. This allows the product to be as competitive as possible within the marketplace. It preserves the relevance of the critical market, and it ensures that a team’s work doesn’t wind up collecting dust on a shelf. This is why the Agile Method is an attractive developmental option for stakeholders and developers alike.

There are many critics of the Agile Method; however, this method produces results that clients can take to the bank. Although a project may not turn out exactly as the client envisions, it will be delivered within the time that it needs to be produced. Throughout the process, the client and the team are changing the requirements in order to produce the quality needed by the client. Clients are happy with the results, and the team satisfies the client’s needs. The ongoing change can sometimes give both the client and the team more than they had originally envisioned for the product. The Agile Method really is a winning solution for everyone involved in software development.

Conclusion

In agile projects, it is everyone’s (developers, quality assurance engineers, designers, etc.) responsibility to manage the project to achieve the objectives of the project.

In addition to that, the agile project manager plays a key role in agile team in order to provide the resources, keep the team motivated, remove blocking issues, and resolve impediments as early as possible.

In this sense, an agile project manager is a mentor and a protector of an agile team, rather than a manager.

JIRA Agile and Workflows

JIRA Agile, formerly known as GreenHopper, is a popular JIRA add-on by Atlassian that allows you to manage issues in a more agile way. This means Kanban, Scrum, sprints, burndown charts, and a whiteboard-like active sprints board where you can drag issues from status to status. You don’t even have to be working in an agile way to appreciate it. So it’s very popular indeed.
JIRA Agile cards are simply JIRA issues displayed differently, so of course each card uses a JIRA workflow. The difference is that JIRA Agile lets you assign multiple statuses to columns on the Active Sprints (Work) board. For example, the column named To Do could have issues in the Open and Reopened statuses. When designing a workflow for use in a JIRA Agile board, there are a few points to remember.
Columns
Define the names for your JIRA Agile columns and decide which JIRA statuses should be mapped to each column. You can have statuses that don’t map to any of your columns, and in fact this is sometimes a useful way to keep issues from appearing on a board.
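The column-to-status mapping can be pictured as a simple lookup; the column and status names below are illustrative, not taken from any particular JIRA instance.

```python
# Illustrative mapping of board columns to the JIRA statuses they show.
columns = {
    "To Do":       ["Open", "Reopened"],
    "In Progress": ["In Progress"],
    "Done":        ["Resolved", "Closed"],
}

def column_for(status):
    """Return the board column that shows a status, or None if unmapped."""
    for column, statuses in columns.items():
        if status in statuses:
            return column
    return None  # unmapped statuses keep the issue off the board

print(column_for("Reopened"))
print(column_for("On Hold"))
```

The `None` case models the point made above: a status deliberately left out of every column is a handy way to hide issues from the board.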
Transitions
If you want to be able to drag an issue from any column to any other column, you will need a transition from every status to every other status. A regular JIRA workflow is often shaped like a ladder or diagonal, but a workflow for use in JIRA Agile is more like a fully connected circle. You'll need at least a transition to one of the statuses in every column to allow cards to be dragged, and a fully connected workflow means an O(N²) number of transitions. A better approach is to define a common transition for entering each status; that way you only have to define as many transitions as there are statuses in the workflow. The Agile Simplified workflows that JIRA Agile can create are just the statuses with a global transition to every status. This means that you'll see transitions back to the same status in every issue, which can confuse some users. You also need to check that the system Resolution field is set properly in these generated workflows.

You can add multiple conditions to a transition and even have levels of combinations of them. However, I recommend avoiding complicated conditions that restrict who can change an issue's status, or you'll have users who are frustrated that they can drag issues to some columns but not others.
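The transition-count argument is easy to check with a little arithmetic: a fully connected workflow needs a transition from every status to every other status, while one common (global) transition per status grows only linearly.

```python
def fully_connected_transitions(n):
    """A transition from every status to every other: n * (n - 1)."""
    return n * (n - 1)

def global_transitions(n):
    """One common transition into each status."""
    return n

# With 6 statuses: 30 pairwise transitions versus just 6 common ones.
for n in (4, 6, 10):
    print(n, fully_connected_transitions(n), global_transitions(n))
```

The gap widens quickly, which is why common transitions keep large workflows maintainable.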
Transition Screens
Many people like to avoid transition screens with JIRA Agile. When they're dragging issues from one column to another, they don't want pop-ups interrupting them. This is fine except for transitions that need to set a resolution. The resolution can be set in a post function, but which resolution should be chosen? I recommend choosing the most frequent resolution, then adding another transition between the same two statuses with a transition screen that lets you choose a resolution. When the issue is dragged to a column where the resolution should be set, two small areas will appear, named after each transition. One could be Completed, since it sets the resolution with no pop-up screen, and the other could be named something such as Close – Other.
Notifications
People drag issues around on the work board without realizing that they are sending out email about status changes. You may want to use a custom event for some transitions and prevent the notification scheme from sending out email for that event.

Implementing a Workflow

Once you have the names and descriptions of the statuses and transitions, you can create the new workflow at Administration→Issues→Workflows. For ideas on naming the workflow, see the sections “Workflow Schemes” and “Workflow Scheme”.
JIRA has included some kind of graphical workflow designer since version 4.4. To use the text-based editor in JIRA 5 you have to click on the column that shows the number of steps in a workflow or the Text link. Since JIRA 6, there is a more obvious Text button to allow users to work with the text editor.
The graphical editor is an improvement on the original workflow editor, which also had some limitations in what it could do with draft workflows (see “Further Reading”). I confess I still create and edit workflows using the original text editor, perhaps from long familiarity. However, there is no way other than the Workflow Designer to produce workflow diagrams.
First, create new statuses as necessary at Administration→Issues→Statuses, or in the graphical designer Statuses area.
If you are not using the graphical workflow designer, then add the statuses to the workflow in the expected order of their most frequent use.
Caution: A workflow is actually made up of steps, and each step has just one status associated with it. For simplicity make the step names the same as the status names — otherwise, your users will see discontinuities in a few places in JIRA.
JIRA will have added a first step named Open. After you add other steps you will be able to make any one of them your initial status, and can then delete the original step that JIRA added for you. To change the initial status, click on the Open step name, then the Create Issue transition, then Edit, and finally change the Destination Step to the new initial status.
Next, add the transitions away from the first status, also in their expected order of use.
For each transition, after you’ve entered the name and description, check which triggers, conditions, validators, and post functions are wanted, and add them. If you see a number after a transition name, that’s a unique ID for that transition. This can help to see which transitions are shared (common) or global.
I recommend changing the event fired in the post function from Generic Event to something more informative, even if it’s only Issue Updated. This can also be used to reduce the amount of email sent when an issue is updated, as described in “Workflows and Events”.
Check that there is a Transition permission condition. This controls who can change the status of an issue. Earlier versions of JIRA didn’t restrict this except by adding a condition to each transition. In that case I recommend adding a Permission Condition to check for Edit permission and make this behave as expected. Adding such a condition also makes it easier to make a JIRA project properly read-only.
Tip: You are allowed to have self transitions back to the same status if you want to. This is one way to narrowly restrict what is changed in an issue, and is used in the section “Resolution”.
The default JIRA workflow has some triggers, conditions, validators, and post functions that are worth knowing about:
  • The initial Create Issue transition into Open has a validator to check that the user has the Create Issues permission.
  • Transitions have a condition that checks for the Transition Issues permission.
  • The Start Progress transition has a condition to check that the current user is the issue’s assignee. Other users won’t see this transition as a choice. Other variants of the default workflow assign the issue to the current user during this transition.
  • The Closed status has the jira.issue.editable property set to false which means that issues with this status can’t be edited.
  • Many statuses and transitions have a property jira.i18n.title which is used to get the actual name. If you’re having problems renaming something, look for this property, and either delete it or translate the status’ name at Administration→Issues→Statuses.
There are five post functions that are added by default to new transitions, but only one of these is editable: Fire Generic Event. Events are discussed later in “Workflows and Events”.

Deploying and Testing a Workflow

When a workflow is created from scratch, there is of course no project or issue type that is using it, so it’s inactive. Recent versions of JIRA display workflows in active and inactive sections, and the inactive section is not expanded. If you can’t find the workflow you just created, expand the inactive section before searching for it on your browser page.
The first step towards making a workflow active is to create a workflow scheme to define which issue types use each workflow. For instance, tasks (issues with issue type Task) could have a different workflow from bugs, which have an issue type of Bug. See “Workflow Scheme” for details on a recommended way to do this.
Once you have a workflow scheme that refers to the new workflow, you can edit a JIRA project to use the workflow scheme (go to Administration→Projects and click on the project name). Then go to Workflows and click Switch Scheme.
Now when you create a new issue of the specified type in that project, you should see that the status of the issue is the one that you chose as the initial status. The available workflow choices for the issue should be the transitions that you defined as possible from that status. The permission View Read-only Workflow allows people to see an image of each issue’s workflow. This is so helpful that it’s worth adjusting the workflow diagram in the graphical editor to make it clearer.
To test the workflow, execute the transitions between all the statuses, checking for usability errors as well as any actual failures or error messages in the JIRA log files. Check that any custom triggers, conditions, validators, or post functions behave as expected. Manually testing all the different combinations of transitions and user permissions is only really possible for small to medium-sized workflows.
To make a change to a workflow once it is in use and active, you have to create a draft of the workflow (the graphical designer will do this automatically), edit the draft, and finally publish the draft. The option of saving a copy of the original workflow is offered when the workflow is published, and can be useful if version numbers are added to the workflow name. However, I generally find it leads to too many copies of old workflows, so I don’t use it very often.
Some changes to a workflow can’t be done by creating a draft. For the following changes you have to create a copy of the workflow, edit the inactive copy and then change the workflow scheme to use the copy.
  • A workflow’s name cannot be changed, though the description can.
  • Statuses cannot be removed.
  • A status can only have new transitions added from it if it already has at least one outgoing transition. So dead-end statuses cannot have an outgoing transition added.
  • Changing the name of a step used for a status. Generally the step name should be the same as the status name.
One thing that’s currently missing in JIRA is a way to compare two versions of the same workflow. When I really want to be sure of what has changed, I export the workflow’s XML before and after the change and then compare the two files using a diff tool, preferably one that understands XML.
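Lacking a built-in comparison, the export-and-diff approach can be done with any diff tool. Below is a minimal sketch using Python's standard `difflib`; the two XML strings are stand-ins for real workflow exports, not actual JIRA output.

```python
import difflib

# Stand-ins for the XML exported before and after editing the workflow.
before = """<workflow>
  <step id="1" name="Open"/>
  <step id="2" name="Closed"/>
</workflow>""".splitlines()

after = """<workflow>
  <step id="1" name="Open"/>
  <step id="2" name="Resolved"/>
  <step id="3" name="Closed"/>
</workflow>""".splitlines()

# Unified diff showing added/removed lines between the two exports.
diff = list(difflib.unified_diff(before, after,
                                 fromfile="before.xml", tofile="after.xml",
                                 lineterm=""))
print("\n".join(diff))
```

A dedicated XML-aware diff tool will do better on reordered attributes, but even a plain line diff like this makes an unexpected workflow change stand out.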

Workflows and Events

JIRA sends software events internally when issues are changed. Some of these events are hardcoded, such as the one sent when an issue’s assignee changes. However, events sent during a transition are designed to be configurable. Many of the events listed at Administration→System→Events are really intended for use in workflows. For example, the Work Started on Issue event is intended to be sent (fired) by a post function on all transitions into the In Progress status.
The standard post function Fire Generic Event can be edited to send a more appropriate event when a transition executes. The main reason that a JIRA administrator cares about what type of events are sent is because they are used by a project’s Notification Scheme (see “Notification Schemes”), which controls who receives email when the status of an issue changes.
You can also add new types of events to JIRA at Administration→System→Events, as described in detail at https://confluence.atlassian.com/display/JIRA/Adding+a+Custom+Event.
The ability to create new events and have your workflow fire them off instead of the Generic event or some other standard event can be useful for trimming JIRA spam. For example, if you really want to fine-tune who receives email when an issue changes status, you can define a new event type for each transition, perhaps giving them highly descriptive names such as Task Workflow: Open to Resolved Event. (The event names don’t appear in email templates.) Then you can edit the transition from Open to Resolved, and change its post function to fire the appropriate new event. In a custom notification scheme, you can then specify which users will receive email for precisely that one transition and no other transitions.
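The mechanism can be modeled as two lookups: each transition fires a named event, and the notification scheme maps events to recipients. All names below are illustrative, not JIRA's API.

```python
# Each transition fires one custom event (illustrative names).
transition_events = {
    ("Open", "Resolved"):   "Task Workflow: Open to Resolved Event",
    ("Resolved", "Closed"): "Generic Event",
}

# The notification scheme decides who receives mail for each event.
notification_scheme = {
    "Task Workflow: Open to Resolved Event": ["reporter", "qa-team"],
    "Generic Event":                         ["watchers"],
}

def recipients_for(transition):
    """Look up who gets email when this status transition executes."""
    event = transition_events.get(transition, "Generic Event")
    return notification_scheme.get(event, [])

print(recipients_for(("Open", "Resolved")))
```

With one distinct event per transition, the scheme can target email for exactly that transition while every other transition stays on the generic event.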

IBM WebSphere Application Server (WAS) V9.0 Tutorial

WebSphere Application Server, often referred to simply as WAS, is a JEE-compliant application server platform. JEE stands for Java Enterprise Edition and was previously referred to as J2EE. JEE application servers provide functionality to deploy fault-tolerant, distributed, and multi-tier Java software. They also provide the runtime environment and management interface to manage the many modular components that make up JEE applications. Before we begin to look into the specifics of WebSphere Application Server 8 administration, it is important to understand what the product is, why it is often the product of choice to provide a base for an enterprise JEE SOA (Service Oriented Architecture) along with support for the many Java-based standards, and how an organization can benefit from using WAS. We also need to cover some specific WAS terminology and concepts used throughout the tutorial.

What is WebSphere Application Server?

IBM WebSphere Application Server is IBM’s answer to the JEE application server. WAS first appeared in the market as a Java Servlet engine in June 1998, but it wasn’t until version 4 (released in 2001) that the product became a fully JEE 1.2-compliant application server.

In the 10 years since that first J2EE 1.2-compliant release, IBM has invested heavily in WAS, and it is developed with open industry standards in mind, such as Java EE, XML, and Web Services. WebSphere Application Server is now IBM’s flagship for the WebSphere brand and forms the base of many products in IBM’s extended range.

The latest release, WebSphere Application Server version 8, is a JEE 6-compliant application server. Every new version is required to provide improved efficiency and continued compliance with standards, allowing customers who invest in WAS to make use of the new Java capabilities of each JEE release.

When choosing an application server platform on which to run applications and services, architects and developers need to know that WAS will support new JEE features and improved coding practices. WAS has evolved as a product with each new update of the JEE standard, and IBM has continued to provide new versions of WAS to support available features of each new JEE release.

The following table shows a simple comparison of current and previous WAS versions and their compliance with JEE specifications:

Version          J2EE/JEE   EJB   Servlet   JSP
WebSphere 8      6          3.1   3.0       2.2
WebSphere 7      5          3.0   2.5       2.1
WebSphere 6.1    1.4        2.1   2.4       2.0
WebSphere 6      1.4        2.1   2.4       2.0
WebSphere 5.1    1.3        2.0   2.3       1.2
WebSphere 5      1.3        2.0   2.3       1.2
WebSphere 4      1.2        1.1   2.2       1.1
WebSphere 3.5    1.2        1.0   2.1       1.0


Why choose IBM WebSphere Application Server?

JEE is an ever-changing world: new standards and approaches keep becoming available, or becoming the preferred choice of the JEE community. Organizations that have invested in JEE technology require an application server platform that allows them to extend their existing legacy systems and provides service-based frameworks on which their enterprise applications and systems can be built. So there is a continuing need for IBM to support all the facets of the new JEE enterprise features, namely JMS, Web Services, web applications, and Enterprise JavaBeans, ensuring the product continues to innovate and enables customers to extend their own core systems.

IBM is committed to ensuring WAS negates the need for complex architectures, while at the same time providing a platform for servicing business applications, process automation/workflow, and complex bus topologies as required. The WAS product is continually being updated and improved to bring in new technologies as they are released or accepted by the community as a whole.

WAS can be considered the base of your enterprise JEE application-service provisioning toolbox, and it can be extended with custom business solutions as required. Developers and architects want to ensure that their application designs use the latest JEE standards and programming models. Reading through the WAS product specification sheet, which can be downloaded from http://www.ibm.com/developerworks/downloads/ws/was/, you can see that there are many new features in WebSphere Application Server version 8 supporting many industry JEE APIs (Application Programming Interfaces) and standards.

Let’s now turn to a quick overview of the new capabilities in WebSphere 8.

Note

Not all new JEE features are chosen by IBM to be fully supported in new versions of WAS. IBM assesses every new specification and determines which features it will implement. Sometimes the decision is entirely commercial, that is, how an IBM-specific solution can be implemented within the bounds of WebSphere; at other times it is influenced by customers and/or industry needs.


New features

There have been many internal product improvements for efficiency in both resource management and administration time-saving. The following table gives an overview of new enhancements to WAS realized in version 8:

Feature/Capability: Description

Monitored deployments: New monitored directory-based application install, update, and uninstall of Java EE applications.

HPEL: New High-Performance Extensible Logging (HPEL) problem determination tools, plus enhanced security and administration features to improve administrator productivity and control.

Updated installation process: New simplified installation and maintenance through IBM Installation Manager to improve efficiency and control.

Workload efficiency: Run the same workload on fewer servers, creating savings of 30 percent due to performance updates for EJBs and web services.

Improved performance and high availability with WebSphere MQ: Messaging is a key part of any enterprise, both through Java's JMS and IBM's own messaging platform, WebSphere MQ. WAS continues to provide easy integration with MQ.

Security hardening: Security domains have been improved to offer more secure protection for services provided by WAS. Exchange of user identity and attributes in web services is simplified using Security Assertion Markup Language (SAML), as defined in the OASIS Web Services Security SAML Token Profile Version 1.1; SAML assertions represent user identity and user security attributes, and can optionally be used to sign and encrypt SOAP message elements. (The Organization for the Advancement of Structured Information Standards (OASIS) is a global consortium that drives the development, convergence, and adoption of e-business and web service standards.) Web Services Security API (WSS API) and WS-Trust support in JAX-WS enables customers to build single sign-on, web services-based applications; the WSS API supports security token types and deriving keys for signing, signature verification, encryption, and decryption. Auditable security events are security events with audit instrumentation added to the security runtime code, enabling them to be recorded to logs for review. Enhanced cookie support reduces cross-site scripting vulnerabilities and improves support for security features such as SSO (Single Sign-On) and LTPA (Lightweight Third Party Authentication).

Security auditing enhancements: Enhanced security configuration reporting, including session security and web attributes; additional security features enabled by default; security enhancements required by Java Servlet 3.0; Java Authentication SPI for Containers (JSR 196) support, which allows third-party authentication for requests or responses destined for web applications; and the ability to configure federated repositories at the domain level in multiple security domain environments.

Performance improvements: JPA L2 cache, and JPA L2 cache integration with the DynaCache environment; new caching functionality for servlet caching, JSP, web services, the command cache, and so on.

Improved migration support: Better support for migrating applications deployed to WebSphere Application Server 6.0, 6.1, and 7.0; the command-line tools and GUI wizard have been improved.

JDBC (Java Database Connectivity): New and upgraded providers for JDBC database connectivity support.
Architecture and Internals

We have mentioned that WebSphere Application Server 8 has been developed to adhere to the new JEE 6 specification. We will now quickly look at what JEE 6 is made up of, so we can see how WAS maps out.

JEE 6 Server architecture model

It is important for a WAS 8 administrator to have a good awareness of the JEE 6 server architecture model. Let's look at Java EE 6 and quickly run through the internal JEE containers. This should give you an insight into what WebSphere 8 has to offer in the way of JEE 6 support for these containers. We cannot delve into every API/standard of JEE 6, as we are here to learn WebSphere Application Server, but the overview of the containers will help provide context for the specific features of the JEE specification.

Java EE containers

The JEE specification outlines four types of container, as shown in the following diagram. These containers form the guidelines of the services, which are to be provided by a JEE application server as implemented by a software vendor like IBM:


Note

A JEE application will use one or more of the previous four components; that is, an application can simply be a web application running in the Web container alone, or it can be more complex and contain both web components and EJB components, in which case more than one container is used in serving the application.

Applet container

The Applet container manages Java applets. An applet is a Java program that can be embedded into a web page. Most web pages are rendered using HTML/XML-based technology. By embedding the appropriate applet tags in a page, a browser will load a Java applet, which can use the Java AWT/Swing interface APIs, allowing a traditional client-like application to run within the browser. The Applet container manages the execution of the applet, and is contained within the web browser.

Web container

The Web container, also known as a Servlet container, provides web-related services. In a nutshell, this is the component of the application server that serves web content and web services, and facilitates web security, application deployment, and other key services. The following diagram shows the availability of the Java EE 6 APIs in the web container:

EJB Container

The EJB (Enterprise JavaBean) container manages the services of the EJB API and provides an environment for running the enterprise components of a JEE application. Enterprise JavaBeans are used in distributed applications, and facilitate transaction services and appropriate low-level implementations of transaction management and coordination, as required by key business processes. They are essentially the business components of an application.

The EJB container also manages database connections and pooling, threads, and sockets on behalf of enterprise beans, as well as state and session management. The following diagram shows the availability of the Java EE 6 APIs in the EJB container:

Application Client Container

An application client runs on a user's client machine and provides a traditional rich Graphical User Interface (GUI) created from the Swing or the Abstract Window Toolkit (AWT) API. Application clients access enterprise beans running in the business tier, which, as we explained earlier, run in the EJB container. An application client can use RMI (Remote Method Invocation) or other protocols, such as SOAP (Simple Object Access Protocol) over HTTP (Hypertext Transfer Protocol). The following diagram shows the Java EE 6 APIs within the application client container:

Inside WebSphere Application Server

Before we look at installing WAS and deploying an application, we will quickly run over the internals of WAS. The anatomy of WebSphere Application Server is quite detailed so, for now, let’s briefly outline some of the more important parts, discovering more about the working constituent parts as we work through each of the remaining chapters.

The following diagram shows the basic architecture model for a WebSphere Application server JVM:

JVM

All WebSphere Application Servers are essentially Java Virtual Machines (JVMs). IBM has implemented the JEE application server model in a way that maximizes the JEE specification, and also provides many enhancements creating specific features for WAS. JEE applications are deployed to an Application Server.

Web container

A common type of business application is a web application. The WAS web container is essentially a Java-based web server contained within an application server’s JVM, which serves the web component of an application to the client browser.
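To make the idea of a Java process serving web content concrete, here is a minimal sketch using the JDK's built-in com.sun.net.httpserver package. This is not the WAS web container itself, just an illustration of a JVM serving an HTTP response; the class name, path, and message are made up for the example.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class TinyWebContainer {
    // Build the HTML body a handler returns for a given name.
    public static String renderGreeting(String name) {
        return "<html><body>Hello, " + name + "!</body></html>";
    }

    public static void main(String[] args) throws IOException {
        // Bind to an ephemeral port and serve one page, loosely
        // mirroring what a web container does for a servlet.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = renderGreeting("world").getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Listening on port " + server.getAddress().getPort());
        server.stop(0); // stop immediately in this demo
    }
}
```

A real web container adds much more on top of this: request dispatching, session management, security, and deployment of whole applications.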

EJB container

Applications need not comprise only web components. In a more complex enterprise-based application, business objects are created to provide a layer of abstraction between a web application and the underlying data. The EJB container provides the services required to manage the business components implemented as EJBs.

Virtual hosts

A virtual host is a configuration element that is required for the web container to receive HTTP requests. As in most web server technologies, a single machine may be required to host multiple applications and appear to the outside world as multiple machines. Resources that are associated with a particular virtual host are designed not to share data with resources belonging to another virtual host, even if the virtual hosts share the same physical machine. Each virtual host is given a logical name and assigned one or more DNS aliases by which it is known. A DNS alias is the host name and port number that is used to request a web resource, for example, :9080/.

By default, two virtual host aliases are created during installation: one for the administration console, called admin_host, and another, called default_host, which is assigned as the default virtual host alias for all application deployments unless overridden during the deployment phase. All web applications must be mapped to a virtual host; otherwise, web browser clients cannot access the application being served by the web container.
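As a rough sketch of the idea, alias-to-virtual-host resolution can be modeled as a lookup table with wildcard hosts. The class below is purely illustrative, not WAS internals; the ports 9060 and 9080 are the usual defaults for the admin console and applications.

```java
import java.util.HashMap;
import java.util.Map;

public class VirtualHostTable {
    // Hypothetical alias table: "host:port" -> virtual host name.
    private final Map<String, String> aliases = new HashMap<>();

    public void addAlias(String hostPort, String virtualHost) {
        aliases.put(hostPort, virtualHost);
    }

    // A '*' host matches any host on that port, mirroring WAS-style wildcard aliases.
    public String resolve(String host, int port) {
        String vh = aliases.get(host + ":" + port);
        if (vh == null) {
            vh = aliases.get("*:" + port);
        }
        return vh; // null means no virtual host serves this request
    }
}
```

With aliases "*:9080" mapped to default_host and "*:9060" mapped to admin_host, any host name on port 9080 resolves to default_host, while an unmapped port resolves to nothing, which is why an unmapped web application is unreachable.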

Environment settings

WebSphere uses Java environment variables to control settings and properties related to the server environment. WAS variables are used to configure product path names, such as the location of a database driver, for example, ORACLE_JDBC_DRIVER_PATH, and environmental values required by internal WAS services and/or applications.
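The mechanism can be illustrated with a small placeholder expander: a configured path such as ${ORACLE_JDBC_DRIVER_PATH}/ojdbc8.jar is resolved against the variable map at runtime. This is a sketch of the substitution idea only, not WAS code, and the driver file name is a made-up example.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WasVariableExpander {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    // Expand ${NAME} placeholders against a variable map, the way an
    // admin-defined path variable would be resolved into a real path.
    public static String expand(String value, Map<String, String> vars) {
        Matcher m = VAR.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Unknown variables are left as-is rather than failing.
            String replacement = vars.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(sb, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```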

Resources

Configuration data is stored in XML files in the underlying configuration repository of the WebSphere Application Server. Resource definitions are a fundamental part of J2EE administration. Application logic can vary depending on individual business requirements, and there are several resource types that can be used by an application. The following table shows a list of some of the most commonly used resource types:

Resource type: Description

JDBC (Java Database Connectivity): Used to define providers and data sources.

URL providers: Used to define endpoints for external services, for example, web services.

JMS providers: Used to define messaging configurations for the Java Message Service, Message Queuing (MQ) connection factories and queue destinations, and so on.

Mail providers: Enable applications to send and receive mail, typically using SMTP (Simple Mail Transfer Protocol).

JNDI

The Java Naming and Directory Interface (JNDI) is employed to make applications more portable. JNDI is essentially an API for a directory service, which allows Java applications to look up data and objects via a name. Naming operations, such as lookups and binds, are performed on contexts. All naming operations begin with obtaining an initial context. You can view the initial context as a starting point in the namespace. Applications use JNDI lookups to find a resource using a known naming convention. You can override the resource the application is actually connecting to without requiring a reconfiguration or code change in the application. This level of abstraction using JNDI is fundamental and required for the proper use of WAS by applications.
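As a small illustration, a composite JNDI name such as jdbc/AppDS (a made-up name) can be decomposed with the JDK's javax.naming classes. The actual lookup, shown in the comment, of course requires a running naming service such as the one WAS provides.

```java
import javax.naming.CompositeName;
import javax.naming.InvalidNameException;
import java.util.ArrayList;
import java.util.List;

public class JndiNameDemo {
    // Split a composite JNDI name like "jdbc/AppDS" into its components.
    public static List<String> components(String name) {
        try {
            CompositeName cn = new CompositeName(name);
            List<String> parts = new ArrayList<>();
            for (int i = 0; i < cn.size(); i++) {
                parts.add(cn.get(i));
            }
            return parts;
        } catch (InvalidNameException e) {
            throw new IllegalArgumentException("Bad JNDI name: " + name, e);
        }
    }

    public static void main(String[] args) {
        // Inside a container, application code would perform the lookup as:
        //   DataSource ds = (DataSource) new InitialContext().lookup("jdbc/AppDS");
        System.out.println(components("jdbc/AppDS"));
    }
}
```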

Application file types

There are four main file types we work with in Java applications. An explanation of these file types is shown in the following table:

File type: Description

JAR file: A JAR file (Java ARchive) is used for organizing many files into one and employs the .jar file extension.

The actual internal physical layout is much like a ZIP file. A JAR is generally used to distribute Java classes and associated metadata. In JEE applications, the JAR file often contains utility code, shared libraries, and EJBs. An EJB is a server-side component model that encapsulates the business logic of an application, and is one of several Java APIs in the Java Platform, Enterprise Edition with its own specification. You can visit http://java.sun.com/products/ejb/ for information on EJBs.

RAR file: A RAR (Resource Adapter Archive) is a special Java archive (JAR) file that is used to package a resource adapter for the Java 2 Connector (J2C) architecture and has the .rar file extension.

Stored in a RAR file, a resource adapter may be deployed on any JEE server, much like the EAR file of a JEE application. A RAR file may be contained in an EAR file or it may exist as a separate file; WebSphere supports both.

A resource adapter is analogous to a JDBC driver. Both provide a standard API through which an application can access a resource that is outside the JEE server. For a resource adapter, the outside resource is an EIS (Enterprise Information System), and the adapter provides a standard way for EIS vendors' software to be integrated with JEE applications; for a JDBC driver, it is a DBMS (Database Management System). Resource adapters and JDBC drivers are rarely created by application developers. In most cases, both types of software are built by vendors who sell products such as tools, servers, or integration software.
WAR file: A WAR file (Web Application Archive) is essentially a JAR file used to encapsulate a collection of JavaServer Pages (JSP), servlets, Java classes, HTML, and other related files, which may include XML and other file types depending on the web technology used. For information on JSP and servlets, you can visit http://java.sun.com/products/jsp/.

Servlets support dynamic web page content; they provide dynamic server-side processing and can connect to databases.

JavaServer Pages (JSP) files can be used to separate HTML code from the business logic in web pages. Essentially, they too can generate dynamic pages; however, they employ Java beans (classes), which contain specific detailed server-side logic.

A WAR file also has its own deployment descriptor called web.xml, which is used to configure the WAR file and can contain instructions for resource mapping and security.
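A minimal web.xml might look like the following sketch; the servlet name, class, and URL pattern are hypothetical examples, not taken from any real application.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <display-name>Sample Web Application</display-name>
    <!-- Map a (hypothetical) servlet class to a URL pattern -->
    <servlet>
        <servlet-name>HelloServlet</servlet-name>
        <servlet-class>com.example.HelloServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>HelloServlet</servlet-name>
        <url-pattern>/hello</url-pattern>
    </servlet-mapping>
</web-app>
```

Note that under Servlet 3.0, much of this can alternatively be expressed with annotations in the servlet classes themselves.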

EAR file: An Enterprise Archive (EAR) file represents a JEE application that can be deployed on a WebSphere Application Server. EAR files are standard Java archive (JAR) files and have the file extension .ear. An EAR file can consist of the following:
One or more web modules packaged in WAR files.

One or more EJB modules packaged in JAR files.

One or more application client modules.

Additional JAR files required by the application.

Any combination of the above.

The modules that make up the EAR file are themselves packaged in archive files specific to their types; for example, a web module is packaged as a web archive (WAR) file and an EJB module as a Java archive (JAR) file. EAR files also contain a deployment descriptor (an XML file called application.xml) that describes the contents of the application and contains instructions for the entire application, such as security settings to be used in the runtime environment.
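Because all of these archive formats are ZIP-based, the JDK's java.util.jar package can build and inspect them directly. The sketch below creates a throwaway JAR containing one empty class entry and lists its contents, including the manifest that the archive tooling adds; the entry name com/example/Hello.class is made up.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.jar.Attributes;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class JarDemo {
    // Create a minimal JAR with one (empty) class entry, then list its entries.
    public static List<String> buildAndList(Path jarPath) {
        try {
            Manifest mf = new Manifest();
            mf.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
            try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jarPath), mf)) {
                out.putNextEntry(new JarEntry("com/example/Hello.class"));
                out.closeEntry();
            }
            List<String> names = new ArrayList<>();
            try (JarFile jar = new JarFile(jarPath.toFile())) {
                jar.stream().forEach(e -> names.add(e.getName()));
            }
            return names;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Convenience wrapper that manages its own temporary file.
    public static List<String> demoEntries() {
        try {
            Path tmp = Files.createTempFile("demo", ".jar");
            List<String> names = buildAndList(tmp);
            Files.delete(tmp);
            return names;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demoEntries()); // includes META-INF/MANIFEST.MF
    }
}
```

WAR and EAR files differ only in their internal layout conventions (WEB-INF/web.xml, META-INF/application.xml, and so on), not in the archive mechanics.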

WebSphere Architecture Overview

The next view to be presented is that of the WebSphere Application Server product architecture. In a nutshell, the WebSphere Application Server product is an implementation of the J2EE set of specifications with some added functionality only found in this IBM product. Therefore, as opposed to the previous section, this view is unique to WebSphere.

Consequently, this section briefly presents the salient components of the J2EE technologies and their relation to each other from the functional and architectural point of view. Furthermore, emphasis will be placed on aspects that affect or may be affected by security considerations.


WebSphere Application Server simplified architecture

The following diagram depicts a simplified version of the WebSphere Application Server architecture. It presents the application server in the context of a WebSphere node. The application server is the implementation of a JVM. The JVM is made up of various components, and at the same time it interacts with several external components that make up the WebSphere node. So, the diagram presents two major components of a WebSphere environment. On the one hand, the JVM is represented by the parallelogram (purple) labeled Application Server. On the other hand, a larger parallelogram (teal) labeled Node represents the WebSphere node.

Keep in mind that the simplification to the architecture has been done to concentrate on how it relates to application hosting in a secure environment.

WebSphere node component

The node component of this simplified architecture occupies itself with administrative, and thus security, aspects between the WebSphere environment and the infrastructure. In the previous diagram, three components can be observed. The first component is the node agent, represented by the small parallelogram labeled Node agent. Notice that the node agent is itself implemented by a specialized JVM, containing the components required to efficiently perform administrative tasks, including security-related tasks. The node agent interacts with WebSphere administrative components external to the node (and not included in the diagram), chief among them the Deployment Manager. One of the responsibilities of the node agent, as it pertains to the node and thus to the application server JVM, is to maintain updated and valid copies of the node configuration repository. Such a repository may include security domain information, either inherited from the WebSphere cell global security or customized for the node, represented by the parallelogram (black) labeled Local Security Domain.

WebSphere JVM component

The second major component of this simplified architecture is the implementation of a JVM. It is represented in the diagram by a large parallelogram (purple) labeled Application Server. A WebSphere JVM is made of, among other components, several containers such as the Web and EJB containers. Containers, on top of hosting instantiations of Java classes such as servlets and beans, that is, offering the runtime environment for those classes to execute, deal with security aspects of the execution. For instance, a Web container may, given the appropriate settings, ensure that hosted resources only execute if the principal making the request has the required proof entitling it to receive the result of said request.

In addition to containers, a WebSphere JVM may also instantiate a service integration bus (SIB) if a hosted application makes use of the JVM messaging engine. In the diagram, the arrow (brown) labeled SIB represents the bus. Finally, the other JVM components included in this simplified architecture are the administrative component and the JVM security mechanism. This mechanism will interact with the containers to ensure that security is propagated to the classes executing in the said containers.

From this discussion, it can be seen that each vendor has a certain latitude as to the actual implementation of Sun's JVM, and IBM is no exception to this practice. If you wish to find out more about the particulars of the IBM JVM implementation for WebSphere, please refer to the Information Center article "Specifications and API" (http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/rovr_specs.html). In that article you will find out which Java specifications and application programming interfaces are implemented, as well as the version each implements. This information is presented in a neat table that helps you compare each specification and API version to earlier editions of the WebSphere Application Server product (that is, 5.1, 6.0, and 6.1).

Using the WebSphere architecture view

The main benefit of analyzing your WebSphere environment using this view is that it provides you with the vocabulary to better understand the needs of application developers and architects and, equally important, to communicate back to them the special features the WebSphere environment may offer, as well as any restrictions imposed by security or other infrastructure characteristics.

An additional benefit provided by this view is that it offers alternatives to troubleshooting application related issues, as you will become more familiar with which JVM components are being used as the runtime environment for a given enterprise application.

WebSphere technology stack view

Finally, the third view covered in this chapter is that of the WebSphere environment technology stack. In other words, this view presents which technologies from the operating system to the WebSphere Application product are involved, highlighting the aspects related to security. This view is broken down into three categories, which are described in the following paragraphs. The stack and its categories are depicted in the diagram shown in the next sub-section.

OS platform security

At the bottom of the stack there are the primitive technologies. The term primitive in this context does not carry the meaning of backward, but rather that of foundation technologies. In the following diagram, the rectangular (bright green) area located at the bottom of the stack represents the OS platform layer.

In this layer, the presence of the underlying operating system can be observed. In the end, it is the responsibility of the OS to provide the low-level resources needed by the WebSphere environment. Furthermore, it is also its responsibility to enforce any security policies required on such resources. Two of the more prominent OS components as they relate to a WebSphere environment are the file system and the networking infrastructure. Both the file systems and the networking infrastructure are handlers of special resources.

Java technology security

The next layer in this architecture is that of the Java technology. This layer comprises the core Java technologies and APIs used within the WebSphere environment. In the previous diagram, the layer is represented by the rectangle (teal) in the middle of the stack.

The layer is further broken down into three distinct groups within the Java stack. At the bottom sit the foundational bricks: the Java Virtual Machine and the Java Language Specification. The JVM is the enabler, whereas the Language Specification lays down the basic and general rules that must be obeyed by the entities that populate the JVM.

The middle brick of this layer is that of Java 2 Security. It includes more sophisticated rules that will enable entities in the JVM to achieve more complex behaviors in harmony with the rest of the inhabitants.

Finally, at the top of this layer there is the J2EE Security brick. It brings additional enablers to the JVM and rules that must be followed by the entities that populate these remote areas of the Java galaxy.

WebSphere security

At the top of the technology stack, sits the WebSphere security layer. It builds up on the previous layers and brings on board open and proprietary security bricks to supplement the Java foundation.

In other words, the WebSphere high-level security layer offers conduits using a number of technologies such as LTPA, Kerberos, and so on, that make the WebSphere environment more robust. This layer is represented in the previous diagram by the rectangle (maroon) located at the top.

In general, the number of technologies supported by this layer as well as the implementation version of such technologies is one of the aspects that make up each new WebSphere release.

Using the technology stack view

One of the main benefits of the technology stack view is that it helps WebSphere practitioners involved in various roles to map the various technologies included in this stack to the functional blocks that make up the other two views. Some practitioners will benefit by selecting the most appropriate subset among the classes offered by the WebSphere environment to implement a required functionality. Other practitioners will benefit by integrating into the WebSphere environment the best infrastructure component that will help to enable a piece of functionality required by a hosted application.

Apache Hadoop Ecosystem

Hadoop Ecosystem

1. Large data on the web.
2. Nutch built to crawl this web data.
3. Large volumes of data had to be saved – HDFS introduced.
4. How to use this data? Reporting.
5. MapReduce framework built for coding and running analytics.
6. Unstructured data – weblogs, click streams, Apache logs.
   Server logs – Fuse, WebDAV, Chukwa, Flume, and Scribe.
7. Sqoop and Hiho for loading RDBMS data into HDFS.
8. High-level interfaces required over low-level MapReduce programming – Hive, Pig, Jaql.
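The MapReduce idea from step 5 can be sketched in plain Java with no Hadoop dependency: a map phase splits lines into (word, 1) pairs, and a reduce phase sums the counts per word. This is only a single-process model of the programming pattern, not the Hadoop API itself.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCountModel {
    // Map phase: split lines into words; reduce phase: count occurrences per word.
    public static Map<String, Long> wordCount(List<String> lines) {
        return lines.stream()
                .flatMap(l -> Arrays.stream(l.toLowerCase().split("\\s+"))) // map: emit words
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w, Collectors.counting())); // reduce: sum per key
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("to be or not to be")));
    }
}
```

In real Hadoop, the same two phases are distributed: mappers run near the data blocks in HDFS, and the framework shuffles intermediate pairs to reducers by key.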

Different Ecosystems of Hadoop

Hadoop is best known for MapReduce and its distributed file system (HDFS, renamed from NDFS).

Note: NDFS was also used for projects that fall under the umbrella of infrastructure for distributed computing and large-scale data processing.

1. HDFS
2. MapReduce
3. Hadoop Streaming
4. Hive and Hue
5. Pig
6. Sqoop
7. Oozie
8. HBase
9. Flume
10. Mahout
11. Fuse
12. Zookeeper

List of Other Big Data Tools

9. BI tools with advanced UI reporting.
10. Workflow tools over MapReduce processes and high-level languages – Oozie.
11. Monitor and manage Hadoop, run jobs/Hive, view HDFS at a high level – Hue, Karmasphere, Eclipse plug-in, Cacti, Ganglia.
12. Support frameworks – Avro (serialization), ZooKeeper (coordination).
13. More high-level interfaces/uses – Mahout, Elastic MapReduce.
14. OLTP is also possible with HBase.
15. Lucene is a text search engine library written in Java.

Reasons to Learn Hadoop & Hadoop Administration

For any company, be it small scale or large scale, generating data and organizing it systematically is extremely important. Thanks to digitalization, data storage facilities are now available in many forms, and Hadoop is a perfect example. Demand for Hadoop has increased considerably because it is an affordable option and, of course, comes with many other benefits as well: it lets organizations make the right use of their investment, learn about trending patterns among their customers, and deliver personalized targeting. This is the main reason why learning Hadoop can prove fruitful for your career: it is an in-demand skill.

Reason to learn Hadoop

Today, the need to process zettabytes of unstructured big data has created strong demand for experts with good Hadoop skills, because Hadoop offers a structured way to work on unstructured data without difficulty. It can seem quite complex and challenging to learn at the beginning, but there is no doubt that with professional training in Hadoop and its certification, you can realize the benefits without any hassle.

1. A better Career scope:

This is one of the best career-growth opportunities you can choose, but understand that the knowledge is not easy to acquire. Many IT companies give preference to professional experts with years of experience in data warehousing or business knowledge, and candidates who also have a good understanding of Hadoop ecosystem technologies such as Cassandra, NoSQL databases, MapReduce, and MongoDB, to name a few, have a good scope of pay. There are many career opportunities through which you can make your way into different business industries, such as healthcare, retail, sports, energy, media, and utilities, and earn a good name. Once you complete the certification course, you can aim for positions like:

• Big Data Architect
• Hadoop Developer
• Data Scientist
• Hadoop Administrator
• Data Analyst

2. A move to a Big company:

You might have started as a fresher in a small company and may not be as happy as you expected, but Hadoop is already a strong pillar of IT companies like Apple, Facebook, and Yahoo, so there is no doubt that learning it gives you a better opportunity to grow. Since it is an open-source framework, it is an easy and cost-friendly solution for a company to afford, which is why companies already prefer hiring candidates with knowledge in this field. As one of the advanced yet well-loved distributed computing technologies with minimal cost, Hadoop expertise can give you direct entry to a better-paying company.

3. Powerful yet versatile option:

Hadoop is one of the most powerful yet versatile frameworks to opt for. It is used for data warehousing, data recovery, and even predictive analytics. It is a must-have for huge companies since it helps create the cornerstone for the flexible future data platforms required in an era where the customer is king. It is powerful because it helps you store huge amounts of data without any problem.

4. An approach with Statistical innovations:

There is always new development taking place in software, and Hadoop is one platform that evolves into something better with each upgrade. With options like bigmemory and ff, it is now quite possible to create datasets larger than memory, and if you synchronize them with Hadoop, it can be an advantageous solution.

5. Pace up your Career in growing Big Data Market:

There are many travel websites that you might have used earlier to learn about a destination and plan a trip accordingly. Now there are many multinational companies, even ones in auction management, where a user can easily find instant information about a product. All this is possible largely because of Hadoop. It is an exciting part of the big data concept, which has made it easy for people to deal with these challenges. If you want to improve your scope, there is no doubt that improving your knowledge of Hadoop can be beneficial, since it offers better ways to explore and innovate new things.

6. Better Availability:

Hadoop is approachable for the new user. Software that is easy to install and run gives a user the independence to sit down and learn it anywhere. As an open-source framework, Hadoop makes it easy for users to learn and to follow changes when required, so a user can adopt it without any hassle. Since it is in demand among big companies and individual users alike, it is widely available in a flexible market.

7. A good pay:

Of course, in today's market no one wants to compromise on pay. If you are considering Big Data as a career option, Hadoop offers a skill set for which you can command a good salary. With the increasing demand for professionals with these skills, they are being paid noticeably better. People with strong Hadoop skills are in demand because they bring deep analytical expertise to a company, which is why such professionals can reasonably expect high salaries.

8. Say Goodbye to Complexity:

Yes, you heard it right! Complex projects require more time, so software that offers simple solutions is more in demand. Tools like Hadoop and the Python language handle Big Data with great convenience, giving the user options to rely on for different scenarios. This way, grasping the data will not be difficult for you as an employee either: once you understand how it works, it can set up your career in all possible ways. Understand that there is no hard and fast rule for learning Hadoop. If you are willing to start your career in this field, then irrespective of your background, you can get trained on this platform.

9. A Maturing technology:

Over the last few years, Hadoop has taken over a large portion of the Big Data space, and in a far better manner. Look closer and you will see an ecosystem of partners around Hadoop, including Hortonworks, Tableau, MapR, and BI specialists, to name a few. Surveys note that over the past few years the technology has gained rapid momentum and solid standards are emerging, and the credit goes to Hadoop. Compared with other technologies, this is a maturing technology that gives professionals more scope to grow while learning new concepts. It also teaches you parallel processing, distributed computing, and many related topics which would take far longer to learn individually; learning Hadoop covers them all. This way, you can become a bridge between the increasing demand and the available expertise.

10. Better Security:

If you are already a Java developer, this is the perfect opportunity to secure your career. This type of framework is not hard to learn in any case, but a professional who wishes to switch from Java to Hadoop can do so especially easily, since MapReduce itself is written in Java. This means you can expect a lucrative package with better job security.

Understand that there are tools such as Pig and Hive that are built on top of Hadoop and have their own languages so that data clusters can be worked with smoothly. You also have the freedom to write your own MapReduce code in C or even Python, but that requires reading from standard input and writing to standard output, which is exactly what Hadoop Streaming supports. This is the main reason why learning Hadoop can offer great scope for you in the near future. Make sure you understand each of the concepts associated with it thoroughly.
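The Hadoop Streaming idea mentioned above is simple enough to sketch: the mapper and reducer are ordinary programs that read lines from standard input and write tab-separated key/value pairs to standard output. Below is a minimal word-count sketch in Python; the function names `map_words` and `reduce_counts` are illustrative, not part of any Hadoop API.

```python
from itertools import groupby


def map_words(lines):
    """Mapper: emit a (word, 1) pair for every word on every input line."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1


def reduce_counts(pairs):
    """Reducer: sum the counts for each word.

    Hadoop sorts the mapper output by key before the reduce phase, so a
    real reducer can rely on seeing all pairs for one word contiguously;
    here we sort locally to imitate that shuffle step.
    """
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)


# Under Hadoop Streaming the mapper and reducer run as two separate
# processes wired to stdin/stdout; for example, a standalone reducer
# script would end by printing tab-separated pairs:
#   for word, count in reduce_counts(parsed_pairs_from(sys.stdin)):
#       print(f"{word}\t{count}")
```

Real jobs hand these scripts to the `hadoop-streaming` JAR; the sketch keeps both phases in one process purely for readability.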

Creating your own calculations In SAP BO

So far, we have learned about the basic functionality of SAP BusinessObjects Analysis, edition for OLAP, and we have already covered some of its advanced capabilities. In this section, we will take a look at how we can create our own custom calculations as part of our workspace.
Let us start with a very basic workspace showing all the key figures in the Columns and the characteristics Country and Region in the Rows.

  1. Open the BI Launchpad via Start > All Programs > SAP BUSINESSOBJECTS BI Platform 4.0 > SAP BusinessObjects BI Platform > SAP BusinessObjects BI Platform Java BI Launchpad.
  2. Logon with your SAP credentials and the SAP authentication for your SAP NETWEAVER BW system.
  3. Select the menu Applications.
  4. Select Analysis, Edition for OLAP. In the next step, you are presented with the list of available OLAP Connections.
  5. Select the connection we created previously pointing to your SAP NetWeaver BW system.
  6. Click Next. You are now receiving the list of available BEx queries. You can also use the tab Find to search for specific queries.
  7. Select the BEx Query we created previously and click OK.
  8. Remove all characteristics from the Rows.
  9. Add characteristic Country to the Rows.
  10. Add characteristic Region to the Rows.
  11. Use a right click on the Key Figures in the Column area.
  12. Select the menu Filter by Member (see Figure 5.21).


Figure 5.21 Menu Filter by Member

13. Ensure that the key figures Net Value, Product Costs, and Transport Costs are selected.
14. Click OK (see Figure 5.22).


Figure 5.22 Analysis Workspace

There are multiple ways to add calculations:
You can select those key figures that are part of the calculation in the cross tab and then use the Calculations menu from the tab Analyze.
You can select a single key figure and then use the option to create a dynamic calculation using the menu Calculations from the tab Analyze.
You can use the option to create a custom calculation available in the menu Calculations on the tab Analyze.
In the next few steps, we will use the cross tab we just created to explore these three options.

15. In the cross tab, select the column header for the key figure Transport Costs.
16. Press (and keep pressed) the CTRL button on the keyboard.
17. In the cross tab, select the column header for the key figure Product Costs (see Figure 5.23).


Figure 5.23 Creating Calculations

18. Navigate to the tab Analyze.
19. Select the menu Calculations > Add (see Figure 5.24).


Figure 5.24 Adding Calculations

Based on the order of selecting the key figures, SAP BusinessObjects Analysis, edition for OLAP creates a new column with the calculation: Transport Costs + Product Costs (see Figure 5.25).


Figure 5.25 Workspace with calculation

You can now change or delete the calculation using the menu Calculations on the tab Analyze, or using the symbol in the column header for the new calculation.

20. Navigate to the tab Analyze.
21. Select the menu Calculations.
22. Navigate to the menu Calculations > Transport Costs + Product Costs.
23. Select the option Edit (see Figure 5.26).


Figure 5.26 Edit Calculations

24. The calculation is shown on the left-hand side, and you can now change and customize the new column (see Figure 5.27).

Figure 5.27 Calculation Details

25. Enter “Total Costs” for the Name.
26. For the option Place After, use the symbol and place the calculation after Net Value.
27. Click Validate. The Status will display if your calculation is valid or not.
28. Click OK.
29. Use a right click on the Key Figures in the Columns area.
30. Select the menu Filter by Member.
31. Remove the key figures Transport Costs and Product Costs from the cross tab.
32. Click OK.
33. In the cross tab, select the column header for the key figure Total Costs.
34. Press (and keep pressed) the CTRL button on the keyboard.
35. In the cross tab, select the column header for the key figure Net Value.
36. Navigate to the tab Analyze.
37. Select the menu Calculations > Percentage Share.
Based on the order in which we selected the columns, the newly added calculation will show the percentage share of Total Costs in relation to Net Value.
38. In the cross tab, select the column header for the key figure Net Value.
39. Navigate to the tab Analyze.
40. Select the menu Calculations > Dynamic Calculation (see Figure 5.28).


Figure 5.28 Dynamic Calculations

Because this time we selected only a single key figure, the available options are different: we cannot use standard calculations such as Add or Percentage Share, but we can still leverage a Dynamic Calculation.

41. Use the option Rank Number (see Figure 5.29).

Figure 5.29 Workspace with Dynamic Calculation

A new key figure displaying the Rank of the key figure Net Value is being added to your cross tab. You can use the Dynamic Calculation option to add columns such as a Rank or an Average to your cross tab.
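To make the three calculation types concrete, here is a plain-Python sketch of what the tool computes for each one: an Add (Total Costs), a Percentage Share, and a dynamic Rank. This is an illustration only, not the Analysis tool itself, and the sample figures are invented.

```python
rows = [
    # (region, net_value, product_costs, transport_costs) -- sample data
    ("North", 1000.0, 400.0, 100.0),
    ("South", 800.0, 350.0, 50.0),
    ("West", 1200.0, 500.0, 200.0),
]

# Add: Total Costs = Transport Costs + Product Costs
totals = {region: product + transport
          for region, _, product, transport in rows}

# Percentage Share: Total Costs relative to Net Value, in percent
share = {region: 100.0 * totals[region] / net
         for region, net, _, _ in rows}

# Dynamic calculation: Rank of Net Value (1 = highest value)
by_net = sorted(rows, key=lambda r: r[1], reverse=True)
rank = {region: position + 1
        for position, (region, *_) in enumerate(by_net)}
```

The order of selection matters for the same reason it does in the tool: swapping the operands of the percentage share would compute Net Value as a share of Total Costs instead.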

42. Navigate to the tab Analyze.
43. Select the menu Calculations > Custom Calculation.


Figure 5.30 Calculation Details

Using the custom calculation option (see Figure 5.30), you can create a calculation using a large list of formulas.
In this section, we reviewed the different options to add a calculation to your cross tab. In the next section, we will use BEx variables in our workspace with SAP BusinessObjects Analysis, edition for OLAP.

Overview of IBM WebSphere Message Queue

WebSphere MQ Tutorials

Welcome to the WebSphere MQ Tutorials. The intent of these tutorials is to provide an in-depth understanding of the WebSphere MQ software. In addition to the tutorials, you can find interview questions, how-to guides, and common issues and their resolutions for the WebSphere MQ product.

Knowledge Base

  • Publish Subscribe In WebSphere MQ Series
  • Deep Dive Into Message Channels In WebSphere MQ Series
  • Understanding Channels In WebSphere MQ
  • Query Recovery In WebSphere MQ
  • Working With Commands In WebSphere MQ Series
  • How To Create Model And Dead Letter Queue In WebSphere MQ Series
  • How To Create Alias Queue in WebSphere MQ Series
  • WebSphere MQ Interview Questions

What is WebSphere MQ?

This document contains step-by-step technical information for learning MQ Series; using it, you can start working with control commands and IBM WebSphere MQ script (MQSC) commands through Windows Explorer and the UNIX/AIX command-line interface.

MQSeries runs on a variety of platforms. The MQSeries products enable programs to communicate with each other across a network of dissimilar components, such as processors, subsystems, operating systems, and communication protocols. MQSeries programs use a consistent application program interface (API) across all platforms.

MQSeries can be configured to provide assured delivery of the messages. Assured delivery means that even if the hardware or software platform crashes the messages within the system will still be delivered, once the platforms are brought back up. On a typical target system MQSeries consists of a queue manager and a number of queues and channels.

Message in MQ

A WebSphere MQ Series message is simply a collection of data sent by one program and intended for another. The message consists of control information and application-specific data; the control information is required in order to route the message between the programs. A message can be classed as persistent or non-persistent. A persistent message will survive a software or hardware crash or reboot once it has been communicated to a queue manager, whereas a non-persistent message will not. Persistent messages are used as part of the implementation of the assured delivery service supported by MQSeries.

MQ Queue Manager

It is the queue manager that provides the queuing services to the application programs. A queue manager in MQ Series also provides additional functions so that administrators can create new queues, alter the properties of existing queues, and control the operation of the queue manager. Many applications can make use of the queue manager's facilities at the same time, and they can be completely unrelated. It is possible to connect queue managers on different platforms together; this is achieved via a mechanism called channels.

Websphere Message Queues

Queues are named message repositories upon which messages accumulate until they are retrieved by programs that service those queues. WebSphere MQ Queues reside in, and are managed by, a queue manager. Programs access queues via the services provided by the queue manager. They can open a queue, put messages on it, get messages from it, and close the queue. It is also possible to programmatically set, and inquire about, the attributes of queues. Queues are either defined as local or remote. Local queues allow programs to both put messages on, and get messages off, while remote queues only allow programs to put messages on. Remote queues are used in order to provide put access to queues that are local to another platform. Any message put onto a remote queue is automatically routed to the associated platform and local queue by the queue manager via the channels mechanism.

Channels

Channels are named links between platforms across which messages are transmitted. On the source platform the channel would be defined as a sender and on the destination platform as a receiver. It is the sender channel definition that contains the connectivity information, such as the destination platform’s name or IP address. MQ Series Channels must have the same name on both the source and destination platform.
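The relationship between queue managers, local and remote queues, and channels can be sketched in plain Python. This is a conceptual model only, not the WebSphere MQ API: a put on a remote queue is routed over a sender channel to a local queue on the target queue manager, and only local queues can be read from.

```python
from collections import deque


class QueueManager:
    """Toy model of an MQ queue manager (illustrative, not the MQ API)."""

    def __init__(self, name):
        self.name = name
        self.local_queues = {}   # queue name -> deque of messages
        self.remote_queues = {}  # queue name -> (channel name, target queue)
        self.channels = {}       # channel name -> target QueueManager

    def define_local(self, qname):
        self.local_queues[qname] = deque()

    def define_channel(self, cname, target_qmgr):
        # The sender channel definition holds the connectivity information.
        self.channels[cname] = target_qmgr

    def define_remote(self, qname, cname, target_qname):
        self.remote_queues[qname] = (cname, target_qname)

    def put(self, qname, message):
        if qname in self.local_queues:
            self.local_queues[qname].append(message)
        elif qname in self.remote_queues:
            # Routed over the channel to a local queue on the target
            # queue manager, as described above.
            cname, target_qname = self.remote_queues[qname]
            self.channels[cname].put(target_qname, message)
        else:
            raise KeyError(f"unknown queue: {qname}")

    def get(self, qname):
        # Only local queues allow programs to get messages off.
        return self.local_queues[qname].popleft()


# Example wiring: QM1 holds a remote queue definition that routes over a
# channel to a local queue owned by QM2 (all names are invented).
qm1, qm2 = QueueManager("QM1"), QueueManager("QM2")
qm2.define_local("ORDERS")
qm1.define_channel("QM1.TO.QM2", qm2)
qm1.define_remote("ORDERS.REMOTE", "QM1.TO.QM2", "ORDERS")
qm1.put("ORDERS.REMOTE", "order-42")
```

Real channels are asynchronous network links with transmission queues in between; the direct method call here stands in for that transmission step.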

What are the different OGG Initial load methods available?

OGG has two functions: it is used for online data replication and for initial loading.

If you are replicating data between two homogeneous databases, the best method is a database-specific one (Exp/Imp, RMAN, transportable tablespaces, physical standby, and so on). Database-specific methods are usually faster than the alternatives.
If you are replicating data between two heterogeneous databases, or your replication involves complex transformations, a database-specific method can't be used. In those cases you can always use Oracle GoldenGate to perform the initial load.
Within Oracle GoldenGate you have four different ways to perform the initial load:
1. Direct Load – faster, but doesn't support LOB data types (12c adds support for LOBs)
2. Direct Bulk Load – uses the SQL*Loader API for Oracle and SSIS for MS SQL Server
3. File to Replicat – fast, but the rmtfile limit is 2 GB. If the table can't fit in one rmtfile you can use MAXFILES, but the replicat needs to be registered in the target OGG home to read the rmtfiles from the source.
4. File to Database utility – depending on the target database, uses SQL*Loader for Oracle, SSIS for MS SQL Server, and so on.

Oracle GoldenGate initial loading reads data directly from the source database tables without locking them, so you don't need downtime; however, it does consume database resources and can cause performance issues. Take extra precaution to perform the initial load during non-peak hours so that you don't run into resource contention.
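As a sketch of option 1 (Direct Load), the parameter files below show the general shape of a GoldenGate initial-load task. All names here (initext, initrep, tgthost, the ogg credentials, and the hr schema) are placeholders for illustration; adapt them to your environment, and note that the extract is added in GGSCI with the SOURCEISTABLE option so that it reads the tables directly rather than the redo logs.

```
-- Initial-load extract parameter file (source side)
EXTRACT initext
USERID ogg, PASSWORD ogg
RMTHOST tgthost, MGRPORT 7809
RMTTASK REPLICAT, GROUP initrep
TABLE hr.*;

-- Initial-load replicat parameter file (target side),
-- started automatically as a one-off task
REPLICAT initrep
SPECIALRUN
USERID ogg, PASSWORD ogg
ASSUMETARGETDEFS
MAP hr.*, TARGET hr.*;
```

RMTTASK is what makes this a direct load: the extract asks the target manager to start the replicat task itself, so no trail files are written to disk.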

Guide for Installing JIRA Applications on Windows

In this guide we'll run you through installing a JIRA application in a production environment, with an external database, using the Windows installer.

This is the most direct approach to get your production site up and running on a Windows server.

There are some other ways to install JIRA:
>> Evaluation – Get your free trial up and running in no time.
>> Zip – Install JIRA manually from a zip file.
>> Linux – Install JIRA on a Linux operating system

Install a JIRA application

Step-1: Download JIRA

Let’s Download the installer for your OS from- https://www.atlassian.com/software/jira/download

Step-2: Run the Installer

1. Now run the installer. Using a Windows administrator account is recommended.


2. Follow the prompts to install JIRA. You’ll be asked for the following info while installing:

1. Destination Directory – Where JIRA will be installed.

Destination Directory JIRA
2. Home directory – Where JIRA data like logs, search indexes and files will be stored.


3. TCP ports – The HTTP connector port and control port JIRA will run on. Stick with the defaults unless you're running another application on the same port.

Configure TCP Ports JIRA
4. Install as service – This option is only available when you run the installer as administrator.

3. Once the installation completes, JIRA will start up in your browser.

How to Set up your JIRA application

Step-3: Choose set up method

Now Choose I’ll set it up myself.

Step-4: Connect to your database

1. If you haven't done so yet, it's time to create your database. Check the 'Before you begin' section of this page for details.
2. Choose My own database.
3. Choose your type of database, then enter the details for your database.

> JIRA connects to your database using a standard JDBC database connection. Connection pooling is handled within JIRA; you can change this later using the JIRA configuration tool.

If you're using Oracle or MySQL, there's an extra step:

Download and extract the proper database JDBC drivers.
* Copy the JAR file into your /lib folder before continuing with the setup wizard.

> In the setup wizard:

* Driver Class Name – The Java class name for your database driver. If you're not sure, check the documentation for your database.
* Database URL – The JDBC URL for your database. If you're not sure, check the documentation for your database.
* Username and Password – A valid username and password that JIRA can use to access your database.
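For example, with PostgreSQL the first two values typically look like the following; the host, port, and database name shown here are placeholders for your own:

```
Driver Class Name:  org.postgresql.Driver
Database URL:       jdbc:postgresql://dbserver.example.com:5432/jiradb
```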

Step-5: Set Application Properties

1. Give your JIRA site a name.
2. Choose whether your site should be private or whether anyone can sign up (you can change this later).
3. Enter your base URL – this is the address people will use to access your JIRA site.

Step-6: Enter your license

Follow the prompts to log in to my.atlassian.com to retrieve your license, or enter a license key.

Step-7: Create your administrator account

Now create the administrator account and enter its details. You can add more administrators once setup is complete.

Step-8: Set up email notifications

To get email notifications from JIRA when issues change, enter the details of your mail server.

Step-9: Start using JIRA

That’s it! Now your JIRA site is accessible from your base URL or a URL as follows:
JIRA Base URL

Note:
Here are a few tips to help you get your team up and running:
> Add and invite users to get your team on board, or configure user directories for slightly bigger teams.
> Create your first project to have something to work on.
> Configure SSL or HTTPS to keep JIRA and your team more secure.

Important things to keep in mind while installing JIRA:

Before installing JIRA, consider the following questions:

1. Check the Supported platforms page for the JIRA version that you want to install. This will give you information on supported operating systems, databases, and browsers.

Note:
1. Installing JIRA on OS X or macOS is not supported.
2. The JIRA installer includes Java (JRE) and Tomcat, so there is no need to install these separately.

2. Running JIRA as a service in Windows means that your JIRA application will automatically start up when Windows starts.

Note:
If you run JIRA as a service:
> You must run the installer as administrator to be able to install JIRA as a service.
> The JIRA service will run as the Windows "System" user account.
> We strongly recommend creating a dedicated user account (e.g. with username 'jira') on Windows for running JIRA.

If you choose not to run JIRA as a service:
> You will start and stop JIRA by running the start-jira.bat file in your JIRA installation directory.
> JIRA will run as the Windows user account that was used to install it, or you can run it as a dedicated user.
> JIRA must be restarted manually if your server is restarted.

3. To run JIRA in production you'll need an external database. Check the Supported platforms page for the version you're installing for the list of databases currently supported. If you don't already have a database, PostgreSQL is free, simple to set up, and has been widely tested with JIRA.

Note:
> Set up your database before you start. Guides are available for PostgreSQL, Oracle, MySQL, and SQL Server.
> Use UTF-8 character encoding.
> If you want to use Oracle or MySQL, you'll have to download the driver for your database.
> The embedded H2 database can be used for evaluating JIRA, but you'll need to move to another database before going into production. You may find it easier to use an external database from the start.

4. You'll need a valid JIRA Software Server, JIRA Core Server, or JIRA Service Desk Server license to run JIRA.

Note:
> If you don't purchase a JIRA application license, you can create an evaluation license during setup.
> If you already have a license key, you'll be prompted to sign in to my.atlassian.com to retrieve it, or you can enter the key manually during setup.
> If you're migrating from JIRA Cloud, you'll need a new license.