Monday, December 4, 2017

20 Real Time Blue Prism Interview Questions and Answers

Here are 20 real-time Blue Prism interview questions, with their answers given just below them. These sample RPA Blue Prism questions were framed by experts who train people to learn Blue Prism online, to give you an idea of the type of questions that may be asked in an interview. We have taken full care to give correct answers for all the questions. Do comment your thoughts. Happy job hunting!

1. What is Robotic Automation?
Robotic automation is a type of automation in which a machine or computer mimics a human's actions to complete rules-based tasks.

2. What is Blue Prism’s Robotic Automation?
Robotic Automation refers to process automation in which computer software drives existing enterprise application software in the same way that a user does. The automation is a tool or platform that operates other application software through the existing application UI.
3. Is Robotic Automation like screen scraping or macros?
No, Robotic Automation is a generation on from old technologies like screen scraping or macros.
The major differences are:
Robots are universal application orchestrators: any application that can be used by a person can be used by a modern robot, whether mainframe, bespoke application, legacy, web-service enabled, or even a closed 3rd-party API-hosted service.
Robots assemble procedural knowledge, which over time joins a shared library that can be re-used by any other robot or device.
Applications are “read” by the robot, either through submitted APIs where they exist, through the OS beneath the application's presentation, or through the screen in the case of a native application. In this last case the modern robot “reads” an application screen in context, in the same way a user does. As part of the robot's training it is shown how to read the application's display, much as a user would be shown.

4. Is Blue Prism an RPA Tool?
Yes, Blue Prism is an RPA Tool.

5. What systems can Blue Prism robotically integrate?
Blue Prism has incorporated many years of experience of integration with various technologies into its software. The technologies used are secure, reliable and robust. Instead of creating new adaptors for each unique application, we have developed technology adaptors for all the technologies employed at the presentation layer: Windows, Web, Java, Green Screen/Mainframe and even Citrix.
This, combined with a broad assortment of dedicated tools that have been developed, means that we are confident in being able to link to any system with the click of a button. This proven application-orchestration capability ensures that new processes can be quickly designed, built and tested without any impact on existing systems.

6. What hardware infrastructure do I need to run Blue Prism’s Robotic Automation Platform?
Blue Prism has been uniquely designed for flexibility and to meet the most robust IT standards for operational integrity, security and supportability. The software can be deployed as either a front-office or back-office process, running quite happily on a standard desktop in the front office or on any scale of systems for back-office processing.

7. What is Process Studio?
A Blue Prism process is created as a diagram that looks like a business flow diagram. Processes are created in an area of Blue Prism named Process Studio (which looks similar to other process-modelling applications) and use standard flow-diagram symbols and notation.

8. Is Blue Prism’s Robotic Automation Platform secure and auditable?
Security and auditability are built into the Blue Prism robotic automation platform at several levels. The runtime environment is completely separate from the process-editing environment.
Permissions to design, create, edit and run processes and business objects are specific to each authorized user.
A full audit trail of changes to any process is kept, and comparisons of the before and after effects of changes are provided.
The log created at run-time for each process provides a detailed, time-stamped history of every action and decision taken within an automated process.
Our clients tend to find that running a process with Blue Prism gives them a lot more control than a manual process, and from a compliance point of view assures that processes are run consistently, in line with the process definition.

9. How do I get started on delivering processes using Blue Prism?
Blue Prism recommends a phased approach to getting started, as the Operational Agility framework is very scalable. It is typical to target the configuration of between 1 and 10 processes initially, with a rolling program of processes being introduced once the framework is established.

10. What support do I need from Blue Prism Professional Services?
It genuinely depends on the capabilities you already have in house and the way you wish to work. Blue Prism can provide a full range of services, from basic training, support and mentoring with a view to quickly getting your team independently delivering ongoing automations, right through to a full turnkey package where we take responsibility for delivering business benefit within agreed service levels.

11. Why Blue Prism?
Reasons:

  1. Supports both internal and external Encryption/Decryption Keys
  2. Automation process can be designed within IT Governance
  3. High level Robustness because of .NET customized code within the process automation
  4. Provides audit logging

12. How much does robotic automation cost?
A “fully loaded” office robot costs around one third as much as a globally sourced agent. The flexibility and ease of deployment mean that this comparison is easy to maintain, and to judge the best method for a given task.

13. What is the difference between thin client and thick client?
Thin client: any application whose attribute properties we cannot obtain while spying with RPA tools, e.g. Citrix or any virtual environment.
Thick client: any application whose attribute properties we can readily obtain with RPA tools, e.g. Calculator or Internet Explorer.

14. Does blue prism require coding?
Blue Prism’s digital workforce is built, managed and owned by the user or customer, spanning operations and technology, and adhering to an enterprise-wide robotic operating model. It is code-free and can automate any software.
The digital workforce can be applied to automate processes in any department where managerial or administrative work is performed over an organization.

15. What are the differences between Blue Prism and UiPath?
UiPath and Blue Prism are both very good tools, each with their own software. Both have graphical process designers for developing solutions.
Differences:
In terms of programming languages
Blue Prism Uses C# for coding
UiPath uses VB for coding
In terms of Control Room/Dashboard
UiPath's control room – the Orchestrator – is web-based; you can access it from a browser or a mobile device.
Blue Prism has client-based servers, accessible only through their apps.
In terms of cost and uses

UiPath

  1. Lower cost of development
  2. Easier to learn and operate
  3. You can learn it by yourself
  4. Study materials are easily available on the internet

 Blue Prism

  • Good for mass-scale deployment of a large number of robots

Friday, June 16, 2017

TOP Apache Cassandra Multiple choice Questions and Answers pdf

41). I have a row or key cache hit rate of 0.XX123456789 reported by JMX. Is that XX% or 0.XX% ?
XX%

42). What happens to existing data in my cluster when I add new nodes?
When a new node joins a cluster, it will automatically contact the other nodes in the cluster and copy the right data to itself.

43). What are "Seed Nodes" in Cassandra?
A seed node in Cassandra is a node that is contacted by other nodes when they first start up and join the cluster. A cluster can have multiple seed nodes. A seed node helps the bootstrapping process for a new node joining a cluster. It is recommended to use two seed nodes per data center.
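Seed nodes are declared in each node's `cassandra.yaml`. A minimal sketch of that configuration fragment (the IP addresses here are hypothetical placeholders, not from the original text):

```yaml
# Illustrative cassandra.yaml fragment: two seed nodes for this data center
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.1,10.0.0.2"
```

New nodes contact the listed seeds at startup to learn the cluster topology; the seeds themselves are otherwise ordinary nodes.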

44). When to avoid secondary indexes?
Avoid secondary indexes on columns that contain a high count of unique values, as a query on such a column will produce few results for each value.

45). What are the benefits of NoSQL over a relational database?
NoSQL overcomes the weaknesses that the relational data model does not address well, which are as follows:
Huge volumes of structured, semi-structured, and unstructured data
Flexible data model (schema) that is easy to change
Scalability and performance for web-scale applications
Lower cost
Impedance mismatch between the relational data model and object-oriented programming
Built-in replication
Support for agile software development

46). What ports does Cassandra use?
By default, Cassandra uses 7000 for cluster communication, 9160 for clients (Thrift), and 8080 for JMX. These are all editable in the configuration file or bin/cassandra.in.sh (for JVM options). All ports are TCP.

47). What do you understand by High availability?
A high-availability system is one that is ready to serve any request at any time. High availability is usually achieved by adding redundancy. So, if one part fails, another part of the system can serve the request. To a client, it seems as if everything worked fine.

48). How Cassandra provide High availability feature?
Cassandra is robust software. Nodes joining and leaving are automatically taken care of. With proper settings, Cassandra can be made failure-resistant; that means that if some of the servers fail, data loss will be zero. So, you can just deploy Cassandra over cheap commodity hardware or a cloud environment, where hardware or infrastructure failures may occur.

49). Who uses Cassandra?
Cassandra is in wide use around the world, and usage is growing all the time. Companies like Netflix, eBay, Twitter, Reddit, and Ooyala all use Cassandra to power pieces of their architecture, and it is critical to the day-to-day operations of those organizations. To date, the largest publicly known Cassandra cluster by machine count has over 300 TB of data spanning 400 machines.
Because of Cassandra's ability to handle high-volume data, it works well for a myriad of applications. This means that it's well suited to handling projects from the high-speed world of advertising technology in real time to the high-volume world of big-data analytics and everything in between. It is important to know your use case before moving forward to ensure things like proper deployment and good schema design.

50). When to use secondary indexes?
You want to query on a column that isn't the primary key and isn't part of a composite key, and the column you want to query on has few unique values. (For example, a column Town is a good choice for secondary indexing, because lots of people will be from the same town; date of birth, however, would not be such a good choice.)

Read More Questions:
Apache Cassandra Interview Questions Part1
Apache Cassandra Interview Questions Part2
Apache Cassandra Interview Questions Part3
Apache Cassandra Interview Questions Part4
Apache Cassandra Interview Questions Part5

Objective Apache Cassandra Questions and Answers pdf

31). Explain Zero Consistency?
With zero consistency, write operations are handled in the background, asynchronously. It is the fastest way to write data, and the one that offers the least confidence that operations will succeed.

32). What do you understand by Thrift?
Thrift is the name of the RPC client used to communicate with the Cassandra server.

33). What do you understand by Kundera?
Kundera is an object-relational mapping (ORM) implementation for Cassandra written using Java annotations.

34). JMX stands for?
Java Management Extension

35). What is the difference between Cassandra, Hadoop Big Data, MongoDB, CouchDB?
http://www.interviewquestionspdf.com/2015/10/what-is-difference-between-cassandra.html

36). When to use Cassandra?
Being part of the NoSQL family, Cassandra offers a solution for problems where the requirement is a very write-heavy system with a responsive reporting system on top of the stored data. Consider the use case of web analytics, where log data is stored for each request and you want to build an analytical platform around it to count hits by hour, by browser, by IP, etc. in a real-time manner.

37). When should you not use Cassandra? OR When to use RDBMS instead of Cassandra?
Cassandra is a NoSQL database and does not provide ACID and relational data properties. If you have a strong requirement for ACID properties (for example, financial data), Cassandra would not be a fit. Obviously, you can make it work, but you will end up writing lots of application code to handle ACID properties and will lose time to market badly. Also, managing that kind of system with Cassandra would be complex and tedious for you.

38). What are secondary indexes?
Secondary indexes are indexes built over column values. In other words, let’s say you have a user table which contains a user’s email. The primary index would be the user ID, so if you wanted to access a particular user’s email, you could look them up by their ID. However, answering the inverse query (given an email, fetch the user ID) requires a secondary index.
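The user-table example above can be sketched in CQL. The table and index names here are hypothetical, chosen only to illustrate the idea:

```sql
-- Hypothetical table: the primary index is user_id
CREATE TABLE users (
  user_id uuid PRIMARY KEY,
  email   text
);

-- A secondary index on email makes the inverse lookup possible
CREATE INDEX users_by_email ON users (email);

-- Given an email, fetch the user ID
SELECT user_id FROM users WHERE email = 'someone@example.com';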

39). When to use secondary indexes?
You want to query on a column that isn't the primary key and isn't part of a composite key, and the column you want to query on has few unique values. (For example, a column Town is a good choice for secondary indexing, because lots of people will be from the same town; date of birth, however, would not be such a good choice.)

40). When to avoid secondary indexes?
Avoid secondary indexes on columns that contain a high count of unique values, as a query on such a column will produce few results for each value.


Most recently Apache Cassandra Multiple choice Questions and Answers pdf

21: Explain the two types of compactions in Cassandra.
Compaction refers to a maintenance process in Cassandra in which the SSTables are reorganized to optimize the data structures on disk. There are two types of compaction in Cassandra:
Minor compaction: starts automatically when a new SSTable is created. Here, Cassandra condenses all equally sized SSTables into one.
Major compaction: triggered manually using nodetool. It compacts all SSTables of a ColumnFamily into one.

22: Explain what is Cassandra-Cqlsh?
Cassandra-Cqlsh is a shell for the Cassandra Query Language that enables users to communicate with the database. By using Cassandra cqlsh, you can do the following things:
Define a schema
Insert data, and
Execute a query

23: What is the use of “void close()” method?
This method is used to close the current session instance.

24: What are the collection data types provided by CQL?
There are three collection data types:
List : A list is a collection of one or more ordered elements.
Map : A map is a collection of key-value pairs.
Set : A set is a collection of one or more elements.
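The three collection types can be combined in one table definition. A minimal sketch (the table and column names are hypothetical, chosen only for illustration):

```sql
-- Illustrative table using the three CQL collection data types
CREATE TABLE user_profiles (
  user_id  uuid PRIMARY KEY,
  emails   list<text>,        -- list: ordered elements, duplicates allowed
  settings map<text, text>,   -- map: key-value pairs
  tags     set<text>          -- set: unique, unordered elements
);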

25: Describe Replication Factor?
Replication Factor is the number of copies of each piece of data that exist in the cluster. Increasing the replication factor improves fault tolerance, since more replicas are available to serve requests if a node fails.

26). What is the syntax to create keyspace in Cassandra?
Syntax for creating keyspace in Cassandra is
CREATE KEYSPACE <identifier> WITH <properties>
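A concrete instance of that syntax, using the CQL3 replication map (the keyspace name and replication factor here are illustrative assumptions):

```sql
-- Illustrative keyspace: 3 copies of every row, single-data-center strategy
CREATE KEYSPACE demo_ks
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};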

27). What is a keyspace in Cassandra?
In Cassandra, a keyspace is a namespace that determines data replication on nodes. A cluster contains one or more keyspaces.

28). What is cqlsh?
cqlsh is a Python-based command-line client for Cassandra.

29). Does Cassandra works on Windows?
Yes, Cassandra works pretty well on Windows. Right now both Linux- and Windows-compatible versions are available.

30). What do you understand by Consistency in Cassandra?
Consistency refers to how synchronized and up-to-date a row of Cassandra data is on all of its replicas.


Latest Apache Cassandra Multiple choice Questions and Answers pdf

11: Talk about the concept of tunable consistency in Cassandra.
Tunable Consistency is a characteristic that makes Cassandra a favored database choice of Developers, Analysts and Big data Architects. Consistency refers to the up-to-date and synchronized data rows on all their replicas. Cassandra’s Tunable Consistency allows users to select the consistency level best suited for their use cases. It supports two consistencies – Eventual Consistency and Strong Consistency.
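In cqlsh, the consistency level can be tuned per session with the `CONSISTENCY` command; the two settings below are illustrative endpoints of the spectrum:

```sql
CONSISTENCY QUORUM;   -- strong: a majority of replicas must acknowledge
CONSISTENCY ONE;      -- eventual: one replica is enough, fastest response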

12: What are the three components of Cassandra write?
The three components are:
Commitlog write
Memtable write
SSTable write
Cassandra first writes data to the commit log, then to an in-memory table structure called the memtable, and finally to an SSTable.
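The three-step write path above can be sketched as a toy Python model. This is not Cassandra's actual code; the class name and the flush threshold are invented for illustration only:

```python
# Toy sketch of the write path: commit log -> memtable -> flushed SSTable.
class ToyNode:
    def __init__(self, memtable_limit=3):
        self.commitlog = []            # crash-recovery log: every write lands here first
        self.memtable = {}             # in-memory key -> value store
        self.sstables = []             # flushed, sorted, immutable tables
        self.memtable_limit = memtable_limit

    def write(self, key, value):
        self.commitlog.append((key, value))   # 1. commit log write
        self.memtable[key] = value            # 2. memtable write
        if len(self.memtable) >= self.memtable_limit:
            self.flush()                      # 3. SSTable write

    def flush(self):
        # SSTables are sorted by key and never modified after being written
        self.sstables.append(sorted(self.memtable.items()))
        self.memtable = {}

node = ToyNode()
for k, v in [("a", 1), ("c", 3), ("b", 2), ("d", 4)]:
    node.write(k, v)

print(node.sstables)   # [[('a', 1), ('b', 2), ('c', 3)]]
print(node.memtable)   # {'d': 4}
```

The third write trips the size threshold, so the first three keys are flushed as one sorted table while the fourth waits in the memtable.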

13: Explain zero consistency.
In zero consistency, write operations are handled in the background, asynchronously. It is the fastest way to write data.

14: Mention what are the values stored in the Cassandra Column?
There are three values in Cassandra Column. They are:
Column Name
Value
Time Stamp

15: What do you understand by Kundera?
Kundera is an object-relational mapping (ORM) implementation for Cassandra which is written using Java annotations.

16: What is the concept of SuperColumn in Cassandra?
Cassandra SuperColumn is a unique element consisting of similar collections of data. They are actually key-value pairs with values as columns. It is a sorted array of columns, and they follow a hierarchy when in action.

17: When do you have to avoid secondary indexes?
Avoid secondary indexes on columns containing a high count of unique values, as a query on such a column will produce few results for each value.

18: List the steps in which Cassandra writes changed data into commitlog?
Cassandra appends changed data to the commitlog. The commitlog then acts as a crash-recovery log for the data. Until the changed data has been appended to the commitlog, the write operation will not be considered successful.

19: What is the use of “ResultSet execute(Statement statement)” method?
This method is used to execute a query. It requires a statement object.

20: What is Thrift?
Thrift is the name of the Remote Procedure Call (RPC) client used to communicate with the Cassandra server.


Realtime Apache Cassandra Interview Questions and Answers pdf

1: How many types of NoSQL databases are there?
There are four types of NoSQL databases, namely:
Document Stores (MongoDB, Couchbase)
Key-Value Stores (Redis, Volgemort)
Column Stores (Cassandra)
Graph Stores (Neo4j, Giraph)

2: What do you understand by Commit log in Cassandra?
Commit log is a crash-recovery mechanism in Cassandra. Every write operation is written to the commit log.

3: Define Mem-table in Cassandra.
It is a memory-resident data structure. After the commit log, the data is written to the mem-table. The mem-table is an in-memory/write-back cache space consisting of content in key and column format. The data in the mem-table is sorted by key, and each column family has a distinct mem-table that retrieves column data via key. It stores writes until it is full, and is then flushed out.

4: What is SSTable?
SSTable, or ‘Sorted String Table,’ refers to an important data file in Cassandra. SSTables are created when memtables are flushed to disk, and exist for each Cassandra table. Being immutable, SSTables do not allow any further addition or removal of data items once written. For each SSTable, Cassandra creates three separate structures: a partition index, a partition summary and a bloom filter.

5: What is bloom filter?
Bloom filter is an off-heap data structure to check whether there is any data available in the SSTable before performing any I/O disk operation.
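The idea can be sketched with a toy Python bloom filter (not Cassandra's implementation; the class name, bit-array size and hash count are illustrative choices). The key property is that a negative answer is definitive, so the disk read can be skipped:

```python
import hashlib

class ToyBloomFilter:
    """Minimal bloom filter: may give false positives, never false negatives."""
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, key):
        # derive several bit positions per key from independent hashes
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = True

    def might_contain(self, key):
        # False means "definitely not present" -> skip the I/O entirely
        return all(self.bits[pos] for pos in self._positions(key))

bf = ToyBloomFilter()
bf.add("row-1")
print(bf.might_contain("row-1"))   # True
```

A `True` answer only means the SSTable *might* hold the row, so the read path still has to check the disk; a `False` answer avoids the disk operation altogether.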

6: Establish the difference between a node, cluster & data centres in Cassandra.
Node is a single machine running Cassandra.
Cluster is a collection of nodes that have similar type of data grouped together.
Data centres are useful components when serving customers in different geographical areas. Different nodes of a cluster are grouped into different data centres.

7: Define composite type in Cassandra?
In Cassandra, a composite type allows you to define a key or a column name as a concatenation of data of different types. You can use two types of composites:
Row Key
Column Name

8: What is Cassandra Data Model?
Cassandra Data Model consists of four main components, namely:
Cluster: These are made up of multiple nodes and keyspaces.
Keyspace: It is a namespace to group multiple column families, typically one per application.
Column: It consists of a column name, value and timestamp
Column family: This refers to multiple columns with row key reference.

9: Explain what is a keyspace in Cassandra?
In Cassandra, a keyspace is a namespace that determines data replication on nodes. A cluster contains one or more keyspaces.

10: Elaborate on CQL?
A user can access Cassandra through its nodes using Cassandra Query Language (CQL). CQL treats the database (keyspace) as a container of tables. Programmers use cqlsh, a prompt for working with CQL, or separate application-language drivers.


Objective Apache and Scala Questions and Answers pdf

21. What is an “Accumulator”?
“Accumulators” are Spark’s offline debuggers. Similar to “Hadoop Counters”, accumulators provide the number of “events” in a program.
Accumulators are variables that can be added to only through associative operations. Spark natively supports accumulators of numeric value types and standard mutable collections. “aggregateByKey()” and “combineByKey()” use accumulators.
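The associative-add idea can be sketched without a Spark installation. In real PySpark this would be `acc = sc.accumulator(0)` updated inside tasks; the pure-Python model below (partition data invented for illustration) shows why associativity matters: per-partition partial counts can be merged in any grouping and still give the same total:

```python
from functools import reduce

# Three "partitions" of an RDD-like dataset (illustrative data)
partitions = [[1, 2, 3], [4, 5], [6]]

# Each partition counts its "events" locally, like a task updating an accumulator
partials = [sum(1 for _ in part) for part in partitions]

# The driver merges the partial counts with an associative operation
total_events = reduce(lambda a, b: a + b, partials, 0)
print(total_events)   # 6
```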

22. Which file systems does Spark support?
Hadoop Distributed File System (HDFS)
Local File system
S3

23. What is “YARN”?
“YARN” is a large-scale, distributed operating system for big-data applications. It is Hadoop's central resource-management platform, on which Spark can run to deliver scalable operations across the cluster.

24. List the benefits of Spark over MapReduce.
Due to the availability of in-memory processing, Spark executes processing around 10-100x faster than Hadoop MapReduce.
Unlike MapReduce, Spark provides built-in libraries to perform multiple tasks from the same core, such as batch processing, streaming, machine learning and interactive SQL queries.
MapReduce is highly disk-dependent, whereas Spark promotes caching and in-memory data storage.
Spark is capable of iterative computation while MapReduce is not.
Additionally, Spark stores data in memory whereas Hadoop stores data on disk. Hadoop uses replication to achieve fault tolerance, while Spark uses a different data-storage model, resilient distributed datasets (RDDs), with a clever way of guaranteeing fault tolerance that minimizes network input and output.

25. What is a “Spark Executor”?
When “SparkContext” connects to a cluster manager, it acquires an “Executor” on the cluster nodes. “Executors” are Spark processes that run computations and store the data on the worker node. The final tasks by “SparkContext” are transferred to executors.

26. List the various types of “Cluster Managers” in Spark.
The Spark framework supports three kinds of Cluster Managers:
Standalone
Apache Mesos
YARN

27. What is a “worker node”?
“Worker node” refers to any node that can run the application code in a cluster.

28. Define “PageRank”.
“PageRank” is a measure of the importance of each vertex in a graph.
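A minimal power-iteration sketch of PageRank in plain Python makes the "importance per vertex" idea concrete. The graph, damping factor and iteration count here are illustrative assumptions, not Spark's GraphX implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank by power iteration over a dict of node -> outgoing links."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # every node keeps a base share, plus damped contributions from in-links
        new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            share = rank[n] / len(outs)      # rank split across outgoing edges
            for m in outs:
                new_rank[m] += damping * share
        rank = new_rank
    return rank

# a -> b, b -> a and c, c -> a: "a" is linked to most, so it ranks highest
graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))   # a
```

Because every node distributes all of its rank each round (no dangling nodes in this toy graph), the ranks stay normalized to sum to 1.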

29. Can we do real-time processing using Spark SQL?
Not directly. Spark SQL on its own operates on batch data, but it can be combined with Spark Streaming to run SQL queries over micro-batched, near-real-time data.
30. What is the biggest shortcoming of Spark?
Spark utilizes more storage space than Hadoop MapReduce.
Also, Spark Streaming is not actually streaming, in the sense that some window functions cannot work properly on top of micro-batching.

Read More Questions:
Apache and Scala Interview Questions Part1
Apache and Scala Interview Questions Part2
Apache and Scala Interview Questions Part3