Friday, March 1, 2019

Industry practices and tools 2

The importance of the quality of code
This refers to how easy your code is to understand, use, and maintain.

When you're writing code in a project, you will have to make sure others can comprehend what you're typing. Someone in your group who is less experienced in programming may not understand it, so you should make the code usable so that everyone can check and use it.

So to conclude, the quality of code is as important to a programmer as the quality of food is to a chef: you'll have others testing and tasting the code, others consuming and using it, and so on. You must therefore produce high-quality code that can be understood by other systems and humans alike, and that is easy to fix and maintain later down the line.

Readability refers to how easily the code can be read and how well the layout has been designed.

It's very important that your programming code is readable. When you or someone else reviews the code, it is best if it is comprehensible so that any errors can be noticed more easily; if the code is cluttered, looking through it for errors or making changes will take much longer.
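As a small, made-up illustration of the point above, here is the same logic written twice: once compressed and cryptic, once laid out readably.

```java
public class Readability {
    // Hard to read: cryptic names, everything crammed onto one line.
    static int f(int[] a){int t=0;for(int i=0;i<a.length;i++){if(a[i]%2==0)t+=a[i];}return t;}

    // Easier to read: descriptive name, clear layout, a doc comment.
    /** Sums the even numbers in the given array. */
    static int sumOfEvens(int[] numbers) {
        int total = 0;
        for (int number : numbers) {
            if (number % 2 == 0) {
                total += number;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(f(data));          // prints 6
        System.out.println(sumOfEvens(data)); // prints 6 - same result, clearer code
    }
}
```

Both methods behave identically; only the second one can be reviewed at a glance.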


The three aspects of software quality are:


  • functional quality
  • structural quality
  • process quality


Checkstyle
PMD
  • Empty try/catch/finally/switch blocks.
  • Empty if/while statements.
  • Dead code.
  • Cases with direct implementation instead of an interface.
  • Too complicated methods.
  • Classes with high Cyclomatic Complexity measurements.
  • Unnecessary if statements and for loops that could be while loops.
  • Unused local variables, parameters, and private methods.
  • Overriding the hashCode() method without the equals() method.
  • Wasteful String/StringBuffer usage.
  • Duplicated code: copy/paste code can mean copy/paste bugs, and thus a decrease in maintainability.
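Several of these bad practices can be seen in one short, made-up snippet of the kind PMD flags (the names are purely illustrative):

```java
public class PmdExamples {

    // Empty catch block: the exception is silently swallowed.
    static int parseOrZero(String s) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            // empty catch block - PMD reports this
        }
        return 0;
    }

    // Unused local variable, plus an if guarding a loop that
    // the while condition already covers.
    static int countDown(int n) {
        int unused = 42; // never read - PMD reports this
        int steps = 0;
        if (n > 0) {          // redundant guard:
            while (n > 0) {   // the while condition is the same test
                n--;
                steps++;
            }
        }
        return steps;
    }
}
```

The code compiles and runs correctly, which is exactly why a static analyzer is needed to spot these patterns.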
FindBugs
  • scariest
  • scary
  • troubling
  • of concern
SonarQube
  • It doesn't only show what's wrong; it also offers quality management tools to help you put it right.
  • SonarQube addresses not only bugs but also coding rules, test coverage, code duplications, complexity, and architecture, providing all the details in a dashboard.
  • It gives you a snapshot of your code quality at a certain moment in time, as well as trends of lagging and leading quality indicators.
  • It provides you with code quality metrics to help you make the right decisions.
  • There are code quality metrics that show your progress and whether you're getting better or worse.
Challenges with shared libraries
Front-ends for locally compiled packages
Maintenance of configuration
Repositories
Upgrade suppression
Cascading package removal
Comparison of command
It becomes useful when you have multiple attributes that you don't want to retype under multiple child projects. Finally, dependency management can be used to define a standard version of an artifact to use across multiple projects.
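A sketch of how this might look (com.example:example-lib and its version are made up): the parent POM declares the version once under dependencyManagement, and child POMs omit it.

```xml
<!-- Parent POM: declare the version once. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>example-lib</artifactId>
      <version>2.1.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- Child POM: reference the artifact without a version; -->
<!-- the 2.1.0 from the parent is inherited. -->
<dependencies>
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>example-lib</artifactId>
  </dependency>
</dependencies>
```

Bumping the version in the parent then updates every child project in one place.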
     1. AceProject
     2. Jira
     3. ProWorkflow
     4. Office Timeline
     5. Bridge24
Definition - What does Build Tool mean?
  • Abstract
  • Context
  • Objective
  • Method
  • Results
  • Conclusions 

  • Real-time reports
  • Role-based access control
  • Node management

Build Lifecycle Basics
A Build Lifecycle is Made Up of Phases
  • Builds
  • Documentation
  • Reporting
  • Dependencies
  • SCMs
  • Releases
  • Distribution
  • Mailing list
Item: default location
  • source code: ${basedir}/src/main/java
  • resources: ${basedir}/src/main/resources
  • tests: ${basedir}/src/test
  • compiled byte code: ${basedir}/target/classes
  • distributable JAR: ${basedir}/target
Maven Code Style And Code Conventions
Generic Code Style And Convention
Java
Java Code Style
public class MyMojo
{
    // ----------------------------------------------------------------------
    // Mojo components
    // ----------------------------------------------------------------------

    /**
     * Artifact factory.
     *
     * @component
     */
    private ArtifactFactory artifactFactory;

    ...

    // ----------------------------------------------------------------------
    // Mojo parameters
    // ----------------------------------------------------------------------

    /**
     * The POM.
     *
     * @parameter expression="${project}"
     * @required
     */
    private MavenProject project;

    ...

    // ----------------------------------------------------------------------
    // Mojo options
    // ----------------------------------------------------------------------
    ...

    // ----------------------------------------------------------------------
    // Public methods
    // ----------------------------------------------------------------------

    /**
     * {@inheritDoc}
     */
    public void execute()
        throws MojoExecutionException
    {
      ...
    }

    // ----------------------------------------------------------------------
    // Protected methods
    // ----------------------------------------------------------------------
    ...

    // ----------------------------------------------------------------------
    // Private methods
    // ----------------------------------------------------------------------
    ...

    // ----------------------------------------------------------------------
    // Static methods
    // ----------------------------------------------------------------------
    ...
}
IntelliJ IDEA 4.5+
Eclipse 3.2+
Java Code Convention
JavaDoc Convention
XML
XML Code Style
<aTag>
  <simpleType>This is a simple type</simpleType>

  <complexType>
    <simpleType>This is a complex type</simpleType>
  </complexType>
</aTag>

<!-- Simple XML documentation                                               -->

<!-- ====================================================================== -->
<!-- Block documentation                                                    -->
<!-- ====================================================================== -->
Generic XML Code Convention
POM Code Convention
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion/>

  <parent/>

  <groupId/>
  <artifactId/>
  <version/>
  <packaging/>

  <name/>
  <description/>
  <url/>
  <inceptionYear/>
  <organization/>
  <licenses/>

  <developers/>
  <contributors/>

  <mailingLists/>

  <prerequisites/>

  <modules/>

  <scm/>
  <issueManagement/>
  <ciManagement/>
  <distributionManagement/>

  <properties/>

  <dependencyManagement/>
  <dependencies/>

  <repositories/>
  <pluginRepositories/>

  <build/>

  <reporting/>

  <profiles/>
</project>
XDOC Code Convention
FML Code Convention
A Build Lifecycle is Made Up of Phases
1. validate: Validates that the project is correct and all necessary information is available to complete the build process.
2. initialize: Initializes the build state, for example setting properties.
3. generate-sources: Generates any source code to be included in the compilation phase.
4. process-sources: Processes the source code, for example to filter any values.
5. generate-resources: Generates resources to be included in the package.
6. process-resources: Copies and processes the resources into the destination directory, ready for the packaging phase.
7. compile: Compiles the source code of the project.
8. process-classes: Post-processes the generated files from compilation, for example to do bytecode enhancement/optimization on Java classes.
9. generate-test-sources: Generates any test source code to be included in the compilation phase.
10. process-test-sources: Processes the test source code, for example to filter any values.
11. test-compile: Compiles the test source code into the test destination directory.
12. process-test-classes: Processes the generated files from test code compilation.
13. test: Runs tests using a suitable unit testing framework (JUnit is one).
14. prepare-package: Performs any operations necessary to prepare a package before the actual packaging.
15. package: Takes the compiled code and packages it in its distributable format, such as a JAR, WAR, or EAR file.
16. pre-integration-test: Performs actions required before integration tests are executed, for example setting up the required environment.
17. integration-test: Processes and deploys the package, if necessary, into an environment where integration tests can be run.
18. post-integration-test: Performs actions required after integration tests have been executed, for example cleaning up the environment.
19. verify: Runs any checks to verify that the package is valid and meets quality criteria.
20. install: Installs the package into the local repository, where it can be used as a dependency in other local projects.
21. deploy: Copies the final package to the remote repository for sharing with other developers and projects.
What is a Build Profile?
Types of Build Profiles (and where each is defined):
  • Per Project: defined in the project POM file, pom.xml
  • Per User: defined in the Maven settings XML file (%USER_HOME%/.m2/settings.xml)
  • Global: defined in the Maven global settings XML file (%M2_HOME%/conf/settings.xml)
  • compiler:compile (the compile goal from the compiler plugin) is bound to the compile phase
  • compiler:testCompile is bound to the test-compile phase
  • surefire:test is bound to the test phase
  • install:install is bound to the install phase
  • jar:jar and war:war are bound to the package phase
mvn help:describe -Dcmd=PHASENAME

For example:

mvn help:describe -Dcmd=compile

'compile' is a phase corresponding to this plugin:
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile
Setting Up Your Project to Use the Build Lifecycle
Packaging
Phase: plugin:goal
  • process-resources: resources:resources
  • compile: compiler:compile
  • process-test-resources: resources:testResources
  • test-compile: compiler:testCompile
  • test: surefire:test
  • package: jar:jar
  • install: install:install
  • deploy: deploy:deploy
Plugins
...
<plugin>
   <groupId>org.codehaus.modello</groupId>
   <artifactId>modello-maven-plugin</artifactId>
   <version>1.8.1</version>
   <executions>
     <execution>
       <configuration>
         <models>
           <model>src/main/mdo/maven.mdo</model>
         </models>
         <version>4.0.0</version>
       </configuration>
       <goals>
         <goal>java</goal>
       </goals>
     </execution>
   </executions>
</plugin>
...
...
<plugin>
   <groupId>com.mycompany.example</groupId>
   <artifactId>display-maven-plugin</artifactId>
   <version>1.0</version>
   <executions>
     <execution>
       <phase>process-test-resources</phase>
       <goals>
         <goal>time</goal>
       </goals>
     </execution>
   </executions>
</plugin>

Spark runs applications in Hadoop clusters up to 100 times faster in memory and 10 times faster on disk. Spark is built with data science in mind, and its concept makes data science effortless. Spark is also popular for developing data pipelines and machine learning models. Spark also includes MLlib, a library that provides a progressive set of machine learning algorithms for repetitive data science techniques like classification, regression, collaborative filtering, and clustering.




Each one is worth looking at in more detail.

Functional quality means that the software correctly performs the tasks it’s intended to do for its users. Among the
attributes of functional quality are:
Meeting the specified requirements. Whether they come from the project’s sponsors or the software’s
intended users, meeting requirements is the sine qua non of functional quality. In some cases, this might even
include compliance with applicable laws and regulations. And since requirements commonly change
throughout the development process, achieving this goal requires the development team to understand and
implement the correct requirements throughout, not just those initially defined for the project.
Creating software that has few defects. Among these are bugs that reduce the software’s reliability,
compromise its security, or limit its functionality. Achieving zero defects is too much to ask for most projects,
but users are rarely happy with software they perceive as buggy.
Good enough performance. From a user’s point of view, there’s no such thing as a good, slow application.
Ease of learning and ease of use. To its users, the software’s user interface is the application, and so these
attributes of functional quality are most commonly provided by an effective interface and a well-thought-out
user workflow. The aesthetics of the interface—how beautiful it is—can also be important, especially in
consumer applications.
Software testing commonly focuses on functional quality. All of the characteristics just listed can be tested, at least
to some degree, and so a large part of ensuring functional quality boils down to testing.
The second aspect of software quality, structural quality, means that the code itself is well structured. Unlike
functional quality, structural quality is hard to test for (although there are tools to help measure it, as described
later). The attributes of this type of quality include:
Code testability. Is the code organized in a way that makes testing easy?
Code maintainability. How easy is it to add new code or change existing code without introducing bugs?
Code understandability. Is the code readable? Is it more complex than it needs to be? These have a large
impact on how quickly new developers can begin working with an existing code base.
Code efficiency. Especially in resource-constrained situations, writing efficient code can be critically important.
Code security. Does the software allow common attacks such as buffer overruns and SQL injection? Is it
insecure in other ways?
Both functional quality and structural quality are important, and they usually get the lion’s share of attention in
discussions of software quality. Yet the third aspect, process quality, is also critically important. The quality of the
development process significantly affects the value received by users, development teams, and sponsors, and so
all three groups have a stake in improving this aspect of software quality.
The most obvious attributes of process quality include these:
Meeting delivery dates. Was the software delivered on time?
Meeting budgets. Was the software delivered for the expected amount of money?
A repeatable development process that reliably delivers quality software. If a process has the first two
attributes—software delivered on time and on budget—but so stresses the development team that its best
members quit, it isn’t a quality process. True process quality means being consistent from one project to the
next.

The third group, sponsors, cares about everything: functional quality, structural quality, and process quality. If
they’re smart, the people paying for the project know that slacking off in any area is a poor long-term strategy. In
the end, sponsors are striving to create business value, and the best way to do this is by taking a broad view of
software quality. They must also understand the connection between quality and risk. The risk of accepting lower
software quality in, say, a community website, is much less than the risk of allowing lower quality in an airplane’s
flight control system. Making the choice appropriately commonly requires trade-offs among competing goals.



You could apply various metrics to the code. There are tools to report many of these metrics:
·        Count of open reported defects in a given product.
·        Defect density. Take the number of defects found per the number of source lines of code in the product. Lower is better. However, this metric does ignore defects that haven't been recorded.
·        Fan-in and fan-out. Fan-in is the number of modules that call a given module. Fan-out is the number of modules that the given module calls.
·        Coupling. Consider the number of inputs, outputs, global variables used, module, fan-in, and fan-out. Wikipedia provides a formula to compute coupling.
·        Cyclomatic complexity. This measures the number of paths through a given block of code. The cyclomatic complexity for a block is the upper bound of tests to achieve complete branch coverage. If all paths through the code are actually possible, then this is also the upper bound on test cases needed for path coverage.
·        Halstead complexity measures of program vocabulary, program length, calculated program length, volume, difficulty, and effort. The difficulty is especially useful as it is a representation of how complex the code is to understand. There is also a calculation for the estimated number of bugs in an implementation.
·        Count of open static or dynamic analysis findings. Various tools exist to examine the source code, binary files, and execution paths of software to find possible errors automatically. These findings can be reported as a measure of quality.
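As a toy illustration of the cyclomatic complexity metric from the list above (the classify method is made up), the number of decision points plus one gives the complexity:

```java
public class Complexity {
    // Two decision points (the two boundary checks), so the
    // cyclomatic complexity is 2 + 1 = 3: at least three test
    // cases are needed for complete branch coverage.
    static String classify(int score) {
        if (score >= 90) {
            return "A";
        } else if (score >= 50) {
            return "pass";
        }
        return "fail";
    }
}
```

Each extra if, loop, or boolean operator adds a path, which is why deeply nested methods score so high on this metric.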


You can also take a qualitative approach. Sometimes, the best measure of code quality is to ask someone to look at it and comment on it. This is easiest if you also have a style guide and consistent rules for how you write your code (from formatting through naming conventions). If you want to know about how readable or maintainable your code is, sometimes the best thing to do is to just ask someone else.
Usually, the compiler catches syntactic and arithmetic issues and lists out a stack trace. But there still might be some issues that the compiler does not catch: inappropriately implemented requirements, an incorrect algorithm, bad code structure, or some sort of potential issue that the community knows from experience.
The only way to catch such mistakes by hand is to have a senior developer review your code. Such an approach is not a panacea and does not change much: with each new developer in the team, you need an extra pair of eyes to look at his or her code. Luckily, there are many tools that can help you control code quality, including Checkstyle, PMD, FindBugs, and SonarQube. All of them are usually used to analyze quality and build useful reports, which are very often published by continuous integration servers like Jenkins.
Here is a checklist of Java static code analysis tools that we use at RomexSoft in most of our projects. Let's review each of them.
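To make the first point concrete, the following made-up snippet compiles cleanly yet contains a classic logic bug that only a reviewer or an analysis tool would catch:

```java
public class CompilesButWrong {
    // Bug: == compares object identity, not string contents.
    static boolean sameTextBroken(String a, String b) {
        return a == b;
    }

    // Correct: equals() compares contents.
    static boolean sameText(String a, String b) {
        return a.equals(b);
    }

    public static void main(String[] args) {
        String x = new String("hi");
        String y = new String("hi");
        System.out.println(sameTextBroken(x, y)); // false - a latent bug
        System.out.println(sameText(x, y));       // true
    }
}
```

The compiler is perfectly happy with both methods; only one of them does what the author intended.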
Code reviews are essential to code quality, but usually no one in the team wants to review tens of thousands of lines of code. Luckily, the challenges associated with manual code reviews can be automated by source code analyzer tools like Checkstyle.
Checkstyle is a free and open source static code analysis tool used in software development for checking whether Java code conforms to the coding conventions you have established. It automates the crucial but boring task of checking Java code, and it is one of the most popular tools used to automate the code review process.
Checkstyle comes with predefined rules that help in maintaining code standards. These rules are a good starting point, but they do not account for project-specific requirements. The trick to a successful automated code review is to combine the built-in rules with custom ones, and there is a variety of tutorials with how-tos to help.
Checkstyle can be used as an Eclipse plugin or as part of a build system such as Ant, Maven, or Gradle to validate code and create reports on coding-standard violations.
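As a sketch, Checkstyle might be wired into a Maven build like this (the plugin version and the checkstyle.xml path are assumptions; check the current plugin documentation for real values):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <version>3.1.0</version> <!-- assumed version -->
      <configuration>
        <!-- project-specific rules file; path is an assumption -->
        <configLocation>checkstyle.xml</configLocation>
      </configuration>
    </plugin>
  </plugins>
</build>
```

With this in place, mvn checkstyle:check fails the build on violations, and mvn checkstyle:checkstyle generates a report.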
PMD is a static code analysis tool that is capable of automatically detecting a wide range of potential bugs and unsafe or non-optimized code. It examines Java source code and looks for potential problems such as possible bugs, dead code, suboptimal code, overcomplicated expressions, and duplicated code.
Whereas other tools, such as Checkstyle, can verify whether coding conventions and standards are respected, PMD focuses more on preemptive defect detection. It comes with a rich and highly configurable set of rules, and you can easily choose which particular rules should be used for a given project.
Like Checkstyle, PMD can be used with Eclipse, IntelliJ IDEA, Maven, Gradle, or Jenkins.
Here are a few cases of bad practices that PMD deals with:
FindBugs is an open source Java code quality tool similar in some ways to Checkstyle and PMD, but with a quite different focus. FindBugs doesn't concern itself with formatting or coding standards and is only marginally interested in best practices.
In fact, it concentrates on detecting potential bugs and performance issues and does a very good job of detecting a variety of common, hard-to-find coding mistakes, including thread synchronization problems, null pointer dereferences, infinite recursive loops, and misuse of API methods. FindBugs operates on Java bytecode rather than source code. Indeed, it is capable of detecting quite a different set of issues with a relatively high degree of precision in comparison to PMD or Checkstyle. As such, it can be a useful addition to your static analysis toolbox.
FindBugs is mainly used for identifying hundreds of serious defects in large applications, which are classified into four ranks: scariest, scary, troubling, and of concern.
Let's take a closer look at some cases of bugs.
Infinite recursive loop
public String resultValue() {
    return this.resultValue();
}
Here, the resultValue() method calls itself unconditionally, so every call recurses forever and eventually throws a StackOverflowError.
Null Pointer Exception
FindBugs examines the code for statements that will surely cause a NullPointerException.
Object obj = null;
obj.doSomeThing(); // code execution will cause the NullPointerException
The code below is a relatively simple bug: if str is null while obj holds an instance, the first condition is false, so str.equals(obj) is evaluated on a null reference and throws a NullPointerException.
if ((str == null && obj == null) || str.equals(obj)) {
    // do something
}
Method whose return value should not be ignored
Strings are immutable objects, so ignoring the return value of a method like toUpperCase() will be reported as a bug.
String str = "Java";
str.toUpperCase();      // return value discarded; str still holds "Java"
if (str.equals("JAVA")) // always false
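A sketch of a corrected version that keeps the returned value (isJavaUpperCase is a made-up helper):

```java
public class StringFix {
    // Strings are immutable: transformation methods return a new String,
    // so the result must be kept instead of discarded.
    static boolean isJavaUpperCase(String str) {
        str = str.toUpperCase();
        return str.equals("JAVA");
    }
}
```

Keeping the return value is the fix FindBugs is nudging you toward.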
Suspicious equal() comparison
The method calls equals(Object) on references of different class types with no common subclasses.
Integer value = Integer.valueOf(10);
String str = new String("10");
if (str != null && !str.equals(value)) {
    // do something
}
Objects of different classes should always compare as unequal; therefore str.equals(value) always returns false, and the !str.equals(value) check is always true.
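A minimal demonstration of why such a comparison is suspicious (suspiciousCompare is a made-up helper):

```java
public class EqualsMismatch {
    // The guard looks defensive, but str.equals(value) can never be
    // true when str is a String and value is an Integer.
    static boolean suspiciousCompare(String str, Integer value) {
        return str != null && !str.equals(value); // always true for non-null str
    }

    public static void main(String[] args) {
        System.out.println(suspiciousCompare("10", Integer.valueOf(10))); // prints true
    }
}
```

The condition is effectively a constant, which usually means the author intended something else.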
Hash equals mismatch
A class that overrides equals(Object) but does not override hashCode(), and so uses the inherited implementation of hashCode() from java.lang.Object, will likely violate the invariant that equal objects must have equal hash codes.
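A made-up Point class shows the broken invariant in practice:

```java
import java.util.HashSet;
import java.util.Set;

// Overrides equals(Object) but NOT hashCode(): equal points can land
// in different hash buckets, breaking HashSet/HashMap lookups.
public class Point {
    final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return p.x == x && p.y == y;
    }
    // hashCode() is inherited from Object, so two equal Points
    // almost always have different hash codes.

    public static void main(String[] args) {
        Set<Point> set = new HashSet<>();
        set.add(new Point(1, 2));
        // equals() says the points match, yet the lookup fails:
        System.out.println(set.contains(new Point(1, 2)));
    }
}
```

Adding a matching hashCode() override (e.g. combining x and y) restores the contract.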
Class does not override equals in superclass
Here's a case: a child class extends a parent class (which defines an equals method) and adds new fields but does not override equals itself. Thereby, equality on instances of the child class uses the inherited equals method and, as a result, ignores the identity of the child class and the newly added fields.
To sum up, FindBugs is distributed as a stand-alone GUI application, but there are also plugins available for Eclipse, NetBeans, IntelliJ IDEA, Gradle, Maven, and Jenkins. Additional rule sets can be plugged into FindBugs to increase the set of checks performed.
SonarQube is an open source platform, originally launched in 2007, that developers use to manage source code quality. Sonar was designed to support a global continuous-improvement strategy on code quality within a company and can therefore be used as a shared central system for quality management. It makes management of code quality possible for any developer in the team. As a result, in recent years it has become a world leader in continuous inspection of code quality management systems.
Sonar currently supports a wide variety of languages including Java, C/C++, C#, PHP, Flex, Groovy, JavaScript, Python, and PL/SQL (some of them via additional plugins). Sonar is very useful as it offers fully automated analysis tools and integrates well with Maven, Ant, Gradle, and continuous integration tools.
Sonar uses FindBugs, Checkstyle, and PMD to collect and analyze source code for bugs, bad code, and possible violations of code style policies. It examines and evaluates different aspects of your source code, from minor styling details, potential bugs, and code defects to critical design errors, lack of test coverage, and excess complexity. In the end, Sonar produces metric values and statistics, revealing problematic areas in the source that require inspection or improvement.
Here is a list of some of SonarQube's features:
SonarQube is a web application that can be installed standalone or inside the existing Java web application. The code quality metrics can be captured by running mvn sonar:sonar on your project.
Your pom.xml file will need a reference to this plugin because it is not a default maven plugin.
<build>
  <plugins>
    <plugin>
      <groupId>org.sonarsource.scanner.maven</groupId>
      <artifactId>sonar-maven-plugin</artifactId>
      <version>3.3.0.603</version>
    </plugin>
  </plugins>
</build>
Also, Sonar provides enhanced reporting via multiple views that show certain metrics (you can configure which ones you want to see) for all projects. And, what's most important, it does not only provide metrics and statistics about your code but translates these nondescript values into real business values such as risk and technical debt.

A software package is an archive file containing a computer program as well as the metadata necessary for its deployment. The computer program can be in source code that has to be compiled and built first. Package metadata includes the package description, package version, and dependencies (other packages that need to be installed beforehand).
Package managers are charged with the task of finding, installing, maintaining or uninstalling software packages upon the user's command. Typical functions of a package management system include:
·        Working with file archivers to extract package archives
·        Ensuring the integrity and authenticity of the package by verifying their digital certificates and checksums
·        Looking up, downloading, installing or updating existing software from a software repository or app store
·        Grouping packages by function to reduce user confusion
·        Managing dependencies to ensure a package is installed with all packages it requires, thus avoiding "dependency hell"
Computer systems that rely on dynamic library linking, instead of static library linking, share executable libraries of machine instructions across packages and applications. In these systems, complex relationships between different packages requiring different versions of libraries result in a challenge colloquially known as "dependency hell". On Microsoft Windows systems, this is also called "DLL hell" when working with dynamically linked libraries. Good package management is vital on these systems. The Framework system from OPENSTEP was an attempt at solving this issue, by allowing multiple versions of libraries to be installed simultaneously, and for software packages to specify which version they were linked against.
System administrators may install and maintain software using tools other than package management software. For example, a local administrator may download unpackaged source code, compile it, and install it. This may cause the state of the local system to fall out of synchronization with the state of the package manager's database. The local administrator will be required to take additional measures, such as manually managing some dependencies or integrating the changes into the package manager.
There are tools available to ensure that locally compiled packages are integrated with the package management. For distributions based on .deb and .rpm files as well as Slackware Linux, there is CheckInstall, and for recipe-based systems such as Gentoo Linux and hybrid systems such as Arch Linux, it is possible to write a recipe first, which then ensures that the package fits into the local package database.
Particularly troublesome with software upgrades are upgrades of configuration files. Since package managers, at least on Unix systems, originated as extensions of file archiving utilities, they can usually only either overwrite or retain configuration files, rather than applying rules to them. There are exceptions to this that usually apply to kernel configuration (which, if broken, will render the computer unusable after a restart). Problems can be caused if the format of configuration files changes; for instance, if the old configuration file does not explicitly disable new options that should be disabled. Some package managers, such as Debian's dpkg, allow configuration during installation. In other situations, it is desirable to install packages with the default configuration and then overwrite this configuration, for instance, in headless installations to a large number of computers. This kind of pre-configured installation is also supported by dpkg.
To give users more control over the kinds of software that they are allowing to be installed on their system (and sometimes due to legal or convenience reasons on the distributors' side), software is often downloaded from a number of software repositories.
When a user interacts with the package management software to bring about an upgrade, it is customary to present the user with the list of actions to be executed (usually the list of packages to be upgraded, and possibly giving the old and new version numbers), and allow the user to either accept the upgrade in bulk, or select individual packages for upgrades. Many package managers can be configured to never upgrade certain packages, or to upgrade them only when critical vulnerabilities or instabilities are found in the previous version, as defined by the packager of the software. This process is sometimes called version pinning.
For instance:
·        yum supports this with the syntax exclude=openoffice*
·        pacman with IgnorePkg = openoffice (to suppress upgrading openoffice in both cases)
·        dpkg and dselect support this partially through the hold flag in package selections
·        APT extends the hold flag through the complex "pinning" mechanism
·        Users can also blacklist a package
·        aptitude has "hold" and "forbid" flags
·        portage supports this through the package.mask configuration file
Some of the more advanced package management features offer "cascading package removal", in which all packages that depend on the target package and all packages that only the target package depends on, are also removed.
Although the commands are specific for every particular package manager, they are to a large extent translatable, as most package managers offer similar functions.


Dependency Management is used to pull all the dependency information into a common POM file, simplifying the references in the child POM files.
Managing your dependencies manually in any programming language is a huge pain. This is why most programming languages today have some implementation of a dependency management system, or sometimes a package manager.
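As a sketch of the idea (the coordinates and version numbers below are made up for illustration), a parent POM's dependencyManagement section acts like a lookup table of pinned versions that child modules fall back on when they declare a dependency without a version:

```python
# Versions a hypothetical parent POM pins in <dependencyManagement>.
managed_versions = {
    ("org.junit.jupiter", "junit-jupiter"): "5.9.2",
    ("com.google.guava", "guava"): "31.1-jre",
}

def resolve_version(group, artifact, declared=None):
    """A child module that declares no version inherits the managed one;
    an explicit version in the child wins."""
    return declared if declared is not None else managed_versions[(group, artifact)]

print(resolve_version("com.google.guava", "guava"))          # 31.1-jre
print(resolve_version("com.google.guava", "guava", "30.0"))  # 30.0
```

This is why the child POMs stay short: they name the dependency, and the common POM supplies the version in one place.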





AceProject is a web-based project tracking software that helps manage projects from end to end. It is a complete project management solution for individuals, teams and enterprises that need to take control of their important workflows and leave nothing to chance. AceProject provides the tools for projects to remain on time and on budget with its time and expense tracking features. Entering time is very easy, almost automated, and convenient with a Time Clock. Users can easily stay on top of all their projects with a project Dashboard that gives instant information with color-coded graphs and details. With Gantt charts, they can view the intricacies of a project and its progress to be able to make informed decisions and necessary actions.


Jira is an agile project management software used by development teams to plan, track, and release software. It is a popular tool designed specifically and used by agile teams. Aside from creating stories, planning sprints, tracking issues, and shipping up-to-date software, users also generate reports that help improve teams, and create their own workflows. As part of Atlassian, it integrates with many tools that enable teams to manage their projects and products from end to end.


ProWorkflow is web-based project management software that enables users to manage tasks and projects, track time, organize contacts, and generate reports for their business. It is a productivity application that provides a comprehensive set of features, yet is still easy to use for all members of the team. Aside from the tools and functionality of the software itself, customers also enjoy free quality support through consultations and training that help get their businesses up and running.


Office Timeline is a PowerPoint timeline maker built for professionals. It makes it easy to produce Gantt charts and timelines directly in Microsoft PowerPoint.


Bridge24 is a reporting and exporting application that enhances the functionality of Asana, Trello, Basecamp, and AceProject. With a one-click dynamic connection, users get access to powerful and flexible tools that enable them to extract greater value from their project data. With a variety of views, filters, advanced reports, interactive charts and exporting tools, they are able to access, organize and categorize valuable and sometimes hidden information. New perspectives and insights allow managers and users to make timely and informed decisions.



Build tools are programs that automate the creation of executable applications from source code. Building incorporates compiling, linking and packaging the code into a usable or executable form. In small projects, developers will often manually invoke the build process. This is not practical for larger projects, where it is very hard to keep track of what needs to be built, in what sequence and what dependencies there are in the building process. Using an automation tool allows the build process to be more consistent.
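The "what sequence and what dependencies" problem the paragraph above mentions is, at its core, a topological sort of the module dependency graph. A minimal sketch using Python's standard-library graphlib (the module names are hypothetical):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# A hypothetical three-module project: each module lists what must be built first.
deps = {
    "app":  {"core", "ui"},
    "ui":   {"core"},
    "core": set(),
}

build_order = list(TopologicalSorter(deps).static_order())
print(build_order)  # ['core', 'ui', 'app']
```

A build tool does exactly this ordering automatically, and a cycle in the graph (which TopologicalSorter reports as an error) corresponds to an unbuildable circular dependency.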


Empirical findings from ten software teams from two large-scale software development projects in Ericsson and ABB demonstrated that teams receive and share their knowledge with a large number of contacts, including other team members, experts, administrative roles, and support roles.
Along with human and organizational capital, social capital and networking are also necessary for participation in large-scale software development, both for novice teams and for mature teams working on complex, unfamiliar, or interdependent tasks.
Social capital has the potential to compensate for gaps in human capital (i.e., an individual's knowledge and skills).
The team network size and networking behavior depend on the following factors: company experience, employee turnover, team culture, need for networking, and organizational support.
Along with investments in training programs, software companies should also cultivate a networking culture to strengthen their social capital and achieve better performance.


Large software development projects involve multiple interconnected teams, often spread around the world, developing complex products for a growing number of customers and users. Succeeding with large-scale software development requires access to an enormous amount of knowledge and skills. Since neither individuals nor teams can possibly possess all the needed expertise, the resource availability in a team's knowledge network, also known as social capital, and effective knowledge coordination become paramount.
In this paper, we explore the role of social capital in terms of knowledge networks and networking behavior in large-scale software development projects.
We conducted a multi-case study in two organizations, Ericsson and ABB, with software development teams as embedded units of analysis. We organized focus groups with ten software teams and surveyed 61 members from these teams to characterize and visualize the teams' knowledge networks. To complement the team perspective, we conducted individual interviews with representatives of supporting and coordination roles. Based on survey data, data obtained from focus groups, and individual interviews, we compared the different network characteristics and mechanisms that support knowledge networks. We used social network analysis to construct the team networks, thematic coding to identify network characteristics and context factors, and tabular summaries to identify the trends.
Our findings indicate that social capital and networking are essential for both novice and mature teams when solving complex, unfamiliar, or interdependent tasks. Network size and networking behavior depend on company experience, employee turnover, team culture, need for networking, and organizational support. A number of mechanisms can support the development of knowledge networks and social capital, for example, introduction of formal technical experts, facilitation of communities of practice and adequate communication infrastructure.
Our study emphasizes the importance of social capital and knowledge networks. Therefore, we suggest that, along with investments into training programs, software companies should also cultivate a networking culture to strengthen their social capital, a known driver of better performance.


1. Gradle
Your DevOps tool stack will need a reliable build tool. Apache Ant and Maven dominated the automated build tools market for many years, but Gradle showed up on the scene in 2009, and its popularity has steadily grown since then. Gradle is an incredibly versatile tool which allows you to write your code in Java, C++, Python, or other languages. Gradle is also supported by popular IDEs such as NetBeans, Eclipse, and IntelliJ IDEA. If that doesn't convince you, it might help to know that Google also chose it as the official build tool for Android Studio.
While Maven and Ant use XML for configuration, Gradle introduced a Groovy-based DSL for describing builds. In 2016, the Gradle team also released a Kotlin-based DSL, so now you can write your build scripts in Kotlin as well. This means that Gradle does have a learning curve, so it can help a lot if you have used Groovy, Kotlin or another JVM language before. Besides, Gradle uses Maven's repository format, so dependency management will be familiar if you have prior experience with Maven. You can also import your Ant builds into Gradle.
The best thing about Gradle is incremental builds, as they save a nice amount of compile time. According to Gradle's performance measurements, it's up to 100 times faster than Maven. This is in part because of incrementality, but also due to Gradle's build cache and daemon. The build cache reuses task outputs, while the Gradle Daemon keeps build information hot in memory in-between builds.
All in all, Gradle allows faster shipping and comes with a lot of configuration possibilities.
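The incremental-build idea above can be sketched very simply: fingerprint a task's inputs, and skip the task when the fingerprint matches the previous run. This is a toy model, not Gradle's actual implementation (which also tracks declared outputs and much more):

```python
import hashlib

_cache = {}  # task name -> fingerprint of its inputs from the previous run

def _fingerprint(inputs):
    """Hash a mapping of file name -> file content."""
    h = hashlib.sha256()
    for name in sorted(inputs):
        h.update(name.encode())
        h.update(inputs[name].encode())
    return h.hexdigest()

def run_task(name, inputs, action):
    """Execute the task only when its inputs changed since the last run."""
    fp = _fingerprint(inputs)
    if _cache.get(name) == fp:
        return "UP-TO-DATE"
    _cache[name] = fp
    action()
    return "EXECUTED"

print(run_task("compile", {"Main.java": "class Main {}"}, lambda: None))  # EXECUTED
print(run_task("compile", {"Main.java": "class Main {}"}, lambda: None))  # UP-TO-DATE
```

Gradle prints a similar UP-TO-DATE marker next to skipped tasks, which is where most of the claimed speedup on unchanged code comes from.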
2. Git
Git is one of the most popular DevOps tools, widely used across the software industry. It's a distributed SCM (source code management) tool, loved by remote teams and open source contributors. Git allows you to track the progress of your development work. You can save different versions of your source code and return to a previous version when necessary. It's also great for experimenting, as you can create separate branches and merge new features only when they're ready to go.
To integrate Git with your DevOps workflow, you also need to host repositories where your team members can push their work. Currently, the two best online Git repo hosting services are GitHub and Bitbucket. GitHub is more well-known, but Bitbucket comes with free unlimited private repos for small teams (up to five team members). With GitHub, you get access only to public repos for free, which is still a great solution for many projects.
Both GitHub and Bitbucket have fantastic integrations. For example, you can integrate them with Slack, so everyone on your team gets notified whenever someone makes a new commit.
3. Jenkins
Jenkins is the go-to DevOps automation tool for many software development teams. It's an open source CI/CD server that allows you to automate the different stages of your delivery pipeline. The main reason for Jenkins' popularity is its huge plugin ecosystem. Currently, it offers more than 1,000 plugins, so it integrates with almost all DevOps tools, from Docker to Puppet.
With Jenkins, you can set up and customize your CI/CD pipeline according to your own needs. The Jenkins docs include a good example pipeline, and that is just one of the possibilities.
It's easy to get started with Jenkins, as it runs out-of-the-box on Windows, Mac OS X, and Linux. You can also easily install it with Docker. You can set up and configure your Jenkins server through a web interface. If you are a first-time user, you can choose to install it with frequently used plugins. However, you can create your own custom config as well.
With Jenkins, you can iterate and deploy new code as quickly as possible. It also allows you to measure the success of each step of your pipeline. I've heard people complaining about Jenkins' "ugly" and non-intuitive UI. However, I could still find everything I wanted without any problem.
4. Bamboo
Bamboo is Atlassian's CI/CD server solution that has many similar features to Jenkins. Both are popular DevOps tools that allow you to automate your delivery pipeline, from builds to deployment. However, while Jenkins is open source, Bamboo comes with a price tag. So, here's the eternal question: is it worth choosing proprietary software if there's a free alternative? It depends on your budget and goals.
Bamboo has many pre-built functionalities that you have to set up manually in Jenkins. This is also the reason why Bamboo has fewer plugins (around 100 compared to Jenkins' 1,000+). In fact, you don't need that many plugins with Bamboo, as it does many things out-of-the-box.
Bamboo seamlessly integrates with other Atlassian products such as Jira and Bitbucket. You also have access to built-in Git and Mercurial branching workflows and test environments. All in all, Bamboo can save you a lot of configuration time. It also comes with a more intuitive UI with tooltips, auto-completion, and other handy features.
5. Docker
Docker has been the number one container platform since its launch in 2013 and continues to improve. It's also thought of as one of the most important DevOps tools out there. Docker has made containerization popular in the tech world, mainly because it makes distributed development possible and automates the deployment of your apps. It isolates applications into separate containers, so they become portable and more secure. Docker apps are also OS and platform independent. You can use Docker containers instead of virtual machines such as VirtualBox.
What I like the most about Docker is that you don't have to worry about dependency management. You can package all dependencies within the app's container and ship the whole thing as an independent unit. Then, you can run the app on any machine or platform without a headache.
Docker integrates with Jenkins and Bamboo, too. If you use it together with one of these automation servers, you can further improve your delivery workflow. Besides, Docker is also great for cloud computing. In recent years, all major cloud providers such as AWS and Google Cloud added support for Docker. So, if you are planning a cloud migration, Docker can ease the process for you.
6. Kubernetes
This year, everyone is talking about Kubernetes. It's a container orchestration platform that takes containerization to the next level. It works well with Docker or any of its alternatives. Kubernetes is still quite new; its first release came out in 2015. It was created by Google engineers who wanted a solution for managing containers at scale. With Kubernetes, you can group your containers into logical units.
You may not need a container orchestration platform if you have just a few containers. However, it's the next logical step when you reach a certain level of complexity and need to scale your resources. Kubernetes allows you to automate the process of managing hundreds of containers.
With Kubernetes, you don't have to tie your containerized apps to a single machine. Instead, you can deploy them to a cluster of computers. Kubernetes automates the distribution and scheduling of containers across the whole cluster.
A Kubernetes cluster consists of one master and several worker nodes. The master node implements your pre-defined rules and deploys the containers to the worker nodes. Kubernetes pays attention to everything. For instance, it notices when a worker node is down and redistributes the containers whenever it's necessary.
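The redistribution described above can be sketched as: drop unhealthy nodes, then greedily place each stranded container on the least-loaded healthy node. This is a drastically simplified toy (the real Kubernetes scheduler also weighs resource requests, affinity rules, and more; node and container names are invented):

```python
def reschedule(assignments, healthy_nodes):
    """Move containers off nodes that are no longer healthy, placing each
    stranded container on the currently least-loaded healthy node."""
    new = {node: list(assignments.get(node, [])) for node in healthy_nodes}
    stranded = [c for node, containers in assignments.items()
                if node not in healthy_nodes for c in containers]
    for container in stranded:
        target = min(new, key=lambda node: len(new[node]))
        new[target].append(container)
    return new

before = {"node-a": ["web-1"], "node-b": ["web-2", "db-1"]}
print(reschedule(before, ["node-a"]))
# {'node-a': ['web-1', 'web-2', 'db-1']}
```

The point is the control-loop shape: compare the desired state (every container running somewhere healthy) with the observed state, and act on the difference.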
7. Puppet Enterprise
Puppet Enterprise is a cross-platform configuration management platform. It allows you to manage your infrastructure as code. As it automates infrastructure management, you can deliver software faster and more securely. Puppet also provides developers with an open-source tool for smaller projects. However, if you are dealing with a larger infrastructure, you may find value in Puppet Enterprise's extra features.
With Puppet Enterprise, you can manage multiple teams and thousands of resources. It automatically understands relationships within your infrastructure. It deals with dependencies and handles failures smartly. When it encounters a failed configuration, it skips all the dependent configurations as well. The best thing about Puppet is that it has more than 5,000 modules and integrates with many popular DevOps tools.
8. Ansible
Ansible is a configuration management tool, similar to Puppet and Chef. You can use it to configure your infrastructure and automate deployment. Its main selling points compared to other similar DevOps tools are simplicity and ease of use. Ansible follows the same Infrastructure As Code (IAC) approach as Puppet. However, it uses the super simple YAML syntax. With Ansible, you can define tasks in YAML, while Puppet has its own declarative language.
Agentless architecture is another frequently mentioned feature of Ansible. As no daemons or agents run in the background, Ansible is a secure and lightweight solution for configuration management automation. Similar to Puppet, Ansible also has several modules.
If you want to better understand how Ansible fits into the DevOps workflow, take a look at this post by the Red Hat Blog. It shows how to use Ansible for environment provisioning and application deployment within a Jenkins pipeline.
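Ansible's task model described above is essentially a list of named steps, each dispatched to a module with arguments. A toy runner makes the shape concrete (the tasks are plain dicts standing in for parsed YAML, and the "debug" module here is a stand-in, not Ansible's API):

```python
def run_playbook(tasks, modules):
    """Run each task in order by dispatching to its named module."""
    results = []
    for task in tasks:
        handler = modules[task["module"]]
        results.append((task["name"], handler(**task.get("args", {}))))
    return results

# A stand-in "debug" module that just returns its message.
modules = {"debug": lambda msg: msg}
tasks = [
    {"name": "say hello", "module": "debug", "args": {"msg": "hello"}},
]
print(run_playbook(tasks, modules))  # [('say hello', 'hello')]
```

Real Ansible adds inventories, idempotent modules, and agentless SSH execution on top of this sequential task-dispatch core.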


Maven is based around the central concept of a build lifecycle. What this means is that the process for building and distributing a particular artifact (project) is clearly defined.
For the person building a project, this means that it is only necessary to learn a small set of commands to build any Maven project, and the POM will ensure they get the results they desire.
There are three built-in build lifecycles: default, clean and site. The default lifecycle handles your project deployment, the clean lifecycle handles project cleaning, while the site lifecycle handles the creation of your project's site documentation.
Each of these build lifecycles is defined by a different list of build phases, wherein a build phase represents a stage in the lifecycle.
For example, the default lifecycle comprises the following phases (for a complete list of the lifecycle phases, refer to the Lifecycle Reference):
§  validate - validate the project is correct and all necessary information is available
§  compile - compile the source code of the project
§  test - test the compiled source code using a suitable unit testing framework. These tests should not require the code to be packaged or deployed
§  package - take the compiled code and package it in its distributable format, such as a JAR.
§  verify - run any checks on results of integration tests to ensure quality criteria are met
§  install - install the package into the local repository, for use as a dependency in other projects locally
§  deploy - done in the build environment, copies the final package to the remote repository for sharing with other developers and projects.
These lifecycle phases (plus the other lifecycle phases not shown here) are executed sequentially to complete the default lifecycle. Given the lifecycle phases above, this means that when the default lifecycle is used, Maven will first validate the project, then will try to compile the sources, run those against the tests, package the binaries (e.g. jar), run integration tests against that package, verify the integration tests, install the verified package to the local repository, then deploy the installed package to a remote repository.
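The sequential execution rule can be sketched in a few lines: invoking a phase runs every phase before it in the lifecycle. The list below is abbreviated to the phases named above (the real default lifecycle has more):

```python
# An abbreviated default lifecycle (the real one has additional phases).
DEFAULT_LIFECYCLE = ["validate", "compile", "test", "package",
                     "verify", "install", "deploy"]

def phases_to_run(requested):
    """Invoking a phase runs every earlier phase in the lifecycle first."""
    return DEFAULT_LIFECYCLE[:DEFAULT_LIFECYCLE.index(requested) + 1]

print(phases_to_run("package"))  # ['validate', 'compile', 'test', 'package']
```

So `mvn package` implies validate, compile and test have already run, which is why you rarely need to chain phases by hand.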


What is Maven?
Maven is a project management and comprehension tool that provides developers with a complete build lifecycle framework. A development team can automate the project's build infrastructure in almost no time, as Maven uses a standard directory layout and a default build lifecycle.
In an environment with multiple development teams, Maven can set up a standard way of working in a very short time. As most project setups are simple and reusable, Maven makes the life of a developer easy when creating reports, checks, builds and test automation setups.
Maven provides developers with ways to manage builds, documentation, reporting, dependencies, releases and distribution.
To summarize, Maven simplifies and standardizes the project build process. It handles compilation, distribution, documentation, team collaboration and other tasks seamlessly. Maven increases reusability and takes care of most build-related tasks.
Maven Evolution
Maven was originally designed to simplify the build process in the Jakarta Turbine project. There were several projects, each with slightly different Ant build files, and JARs were checked into CVS.
The Apache group then developed Maven, which can build multiple projects together, publish project information, deploy projects, share JARs across several projects and help teams collaborate.
Objective
The primary goal of Maven is to provide developers with the following:
·      A comprehensive model for projects, which is reusable, maintainable, and easier to comprehend.
·      Plugins or tools that interact with this declarative model.
Maven project structure and contents are declared in an XML file, pom.xml, referred to as the Project Object Model (POM), which is the fundamental unit of the entire Maven system. In later chapters, we will explain the POM in detail.
Convention over Configuration
Maven uses convention over configuration, which means developers are not required to define the build process themselves.
Developers do not have to specify each and every configuration detail. Maven provides sensible default behavior for projects. When a Maven project is created, Maven creates a default project structure. The developer is only required to place files accordingly and need not define any configuration in pom.xml.
For example, by convention, project source code lives in ${basedir}/src/main/java, resources in ${basedir}/src/main/resources and tests in ${basedir}/src/test/java, where ${basedir} denotes the project location.
In order to build the project, Maven provides developers with options to specify lifecycle goals and project dependencies (which rely on Maven plugin capabilities and its default conventions). Much of the project management and build-related work is handled by Maven plugins.
Developers can build any given Maven project without the need to understand how the individual plugins work. We will discuss Maven Plugins in detail in the later chapters.
Features of Maven
·      Simple project setup that follows best practices.
·      Consistent usage across all projects.
·      Dependency management including automatic updating.
·      A large and growing repository of libraries.
·      Extensible, with the ability to easily write plugins in Java or scripting languages.
·      Instant access to new features with little or no extra configuration.
·      Model-based builds: Maven is able to build any number of projects into predefined output types such as JAR, WAR or metadata.
·      Coherent site of project information: Using the same metadata as the build process, Maven is able to generate a website and a PDF including complete documentation.
·      Release management and distribution publication: Without additional configuration, Maven will integrate with your source control system (such as CVS) and manage the release of a project.
·      Backward compatibility: You can easily port the multiple modules of a project into Maven 3 from older versions of Maven. It supports the older versions as well.
·      Automatic parent versioning: No need to specify the parent version in sub-modules for maintenance.
·      Parallel builds: Maven analyzes the project dependency graph and enables you to build modules in parallel, which can achieve performance improvements of 20-50%.
·      Better error and integrity reporting: Maven has improved error reporting, and it provides you with a link to the Maven wiki page where you will get a full description of the error.


Convention over configuration (also known as coding by convention) is a software design paradigm used by software frameworks that attempts to decrease the number of decisions that a developer using the framework is required to make without necessarily losing flexibility. The concept was introduced by David Heinemeier Hansson to describe the philosophy of the Ruby on Rails web framework, but is related to earlier ideas like the concept of "sensible defaults" and the principle of least astonishment in user interface design.
This document describes how developers and contributors should write code. The reasoning of these styles and conventions is mainly for consistency, readability and maintainability reasons.

All working files (java, xml, others) should respect the following conventions:
§  License Header: Always add the current ASF license header in all versioned files.
§  Trailing Whitespace: Remove all trailing whitespace. If you are an Eclipse user, you could use the AnyEdit Eclipse plugin.
and the following style:
§  Indentation: Never use tabs!
§  Line wrapping: Always use a 120-column line width.
Note: The specific styles and conventions, listed in the next sections, could override these generic rules.
The Maven style for Java is mainly:
§  White space: One space after control statements and between arguments (i.e. if ( foo ) instead of if(foo), and myFunc( foo, bar, baz ) instead of myFunc(foo,bar,baz)). No spaces after method names (i.e. void myMethod(), myMethod( "foo" )).
§  Indentation: Always use 4 space indents and never use tabs!
§  Blocks: Always open blocks with the brace on a new line.
§  Line wrapping: Always use a 120-column line width for Java code and Javadoc.
§  Readability: Specify code grouping of members, if needed. For instance in a Mojo class, you could have:
The following sections show how to set up the code style for Maven in IDEA and Eclipse. It is strongly preferred that patches use this style before they are applied.
Download maven-idea-codestyle.xml and copy it to ~/.IntelliJIDEA/config/codestyles, then restart IDEA. On Windows, try C:\Documents and Settings\<username>\.IntelliJIDEA\config\codestyles.
After this, restart IDEA and open the settings to select the new code style.
Download maven-eclipse-codestyle.xml.
After this, select Window > Preferences, and open up the configuration for Java > Code Style > Code Formatter. Click on the button labeled Import... and select the file you downloaded. Give the style a name, and click OK.
For consistency reasons, our Java code convention is mainly:
§  Naming: Constants (i.e. static final members) should always be in upper case. Use short, descriptive names for classes and methods.
§  Organization: Avoid using a lot of public inner classes. Prefer interfaces over default implementations.
§  Modifiers: Avoid using the final modifier on all member variables and arguments. Prefer private or protected members over public members.
§  Exceptions: Throw meaningful exceptions to make debugging and testing easier.
§  Documentation: Document public interfaces well, i.e. all non-trivial public and protected functions should include Javadoc that indicates what they do. Note: this is an ongoing convention for the Maven team.
§  Testing: All non-trivial public classes should include corresponding unit or IT tests.
The Maven style for XML files is mainly:
§  Indentation: Always use 2-space indents, unless you are wrapping a long line of XML tags, in which case you should indent 4 spaces.
§  Line Breaks: Always use a new line with indentation for complex XML types and no line break for simple XML types. Always use a new line to separate XML sections or blocks, for instance:
In some cases, adding comments could improve the readability of blocks, for instance:
or
No generic code convention exists yet for XML files.
The team voted at the end of June 2008 to follow a specific convention for ordering POM elements. As a consequence, the Maven project descriptor is no longer considered the reference for ordering.
The following is the recommended ordering for all Maven POM files:
Comments:
1.     The <project/> element is always on one line.
2.     The blocks are deliberately separated by a new line to improve readability.
3.     The dependencies in <dependencies/> and <dependencyManagement/> tags have no specific ordering. Developers are free to choose the ordering, but grouping dependencies by topics (like groupId i.e. org.apache.maven) is a good practice.
Note: There are two alternatives for changing the order of a POM file: the Tidy Maven Plugin or the Sortpom Maven Plugin.
For consistency and readability reasons, XDOC files should respect:
§  Metadata: Always specify metadata in the <properties/> tag.
§  Sections: Always use a new line with indentation for <section/> tags.
For readability reasons, FML files should respect:
§  FAQ: Always use a new line with indentation for <faq/> tags.




Default (or Build) Lifecycle
This is the primary lifecycle of Maven and is used to build the application. It has 21 phases, running from validate through deploy.
There are a few important concepts related to Maven lifecycles that are worth mentioning:
·      When a phase is called via a Maven command, for example mvn compile, only phases up to and including that phase will execute.
·      Different Maven goals will be bound to different phases of the Maven lifecycle depending upon the type of packaging (JAR / WAR / EAR).
In the following example, we will attach the maven-antrun-plugin:run goal to a few phases of the build lifecycle. This will allow us to echo text messages displaying the phases of the lifecycle.
We've updated pom.xml in the C:\MVN\project folder.
<project xmlns = "http://maven.apache.org/POM/4.0.0"
   xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation = "http://maven.apache.org/POM/4.0.0
   http://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
   <groupId>com.companyname.projectgroup</groupId>
   <artifactId>project</artifactId>
   <version>1.0</version>
   <build>
      <plugins>
         <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-antrun-plugin</artifactId>
            <version>1.1</version>
            <executions>
               <execution>
                  <id>id.validate</id>
                  <phase>validate</phase>
                  <goals>
                     <goal>run</goal>
                  </goals>
                  <configuration>
                     <tasks>
                        <echo>validate phase</echo>
                     </tasks>
                  </configuration>
               </execution>
           
               <execution>
                  <id>id.compile</id>
                  <phase>compile</phase>
                  <goals>
                     <goal>run</goal>
                  </goals>
                  <configuration>
                     <tasks>
                        <echo>compile phase</echo>
                     </tasks>
                  </configuration>
               </execution>
           
               <execution>
                  <id>id.test</id>
                  <phase>test</phase>
                  <goals>
                     <goal>run</goal>
                  </goals>
                  <configuration>
                     <tasks>
                        <echo>test phase</echo>
                     </tasks>
                  </configuration>
               </execution>
           
               <execution>
                  <id>id.package</id>
                  <phase>package</phase>
                  <goals>
                     <goal>run</goal>
                  </goals>
                  <configuration>
                     <tasks>
                        <echo>package phase</echo>
                     </tasks>
                  </configuration>
               </execution>
           
               <execution>
                  <id>id.deploy</id>
                  <phase>deploy</phase>
                  <goals>
                     <goal>run</goal>
                  </goals>
                  <configuration>
                     <tasks>
                        <echo>deploy phase</echo>
                     </tasks>
                  </configuration>
               </execution>
            </executions>
         </plugin>
      </plugins>
   </build>
</project>
Now open a command console, go to the folder containing pom.xml and execute the following mvn command.
C:\MVN\project>mvn compile
Maven will start processing and display the phases of the build life cycle up to the compile phase.
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------
[INFO] Building Unnamed - com.companyname.projectgroup:project:jar:1.0
[INFO] task-segment: [compile]
[INFO] ------------------------------------------------------------------
[INFO] [antrun:run {execution: id.validate}]
[INFO] Executing tasks
[echo] validate phase
[INFO] Executed tasks
[INFO] [resources:resources {execution: default-resources}]
[WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory C:\MVN\project\src\main\resources
[INFO] [compiler:compile {execution: default-compile}]
[INFO] Nothing to compile - all classes are up to date
[INFO] [antrun:run {execution: id.compile}]
[INFO] Executing tasks
[echo] compile phase
[INFO] Executed tasks
[INFO] ------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------
[INFO] Total time: 2 seconds
[INFO] Finished at: Sat Jul 07 20:18:25 IST 2012
[INFO] Final Memory: 7M/64M
[INFO] ------------------------------------------------------------------


A build profile is a set of configuration values which can be used to set or override default values of the Maven build. Using a build profile, you can customize the build for different environments, such as Production vs. Development.
Profiles are specified in the pom.xml file using its activeProfiles/profiles elements and are triggered in a variety of ways. Profiles modify the POM at build time and are used to give different target environments different parameters (for example, the path of the database server in the development, testing, and production environments).
Build profiles are of three main types: per-project (defined in the project's pom.xml), per-user (defined in the user's Maven settings.xml) and global (defined in the global Maven settings.xml).
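As a minimal sketch, a per-project profile could be declared in pom.xml like this (the profile id and the property are illustrative assumptions, not values from this project):

```xml
<project>
   ...
   <profiles>
      <profile>
         <!-- Illustrative profile id; activate it with: mvn compile -Ptest -->
         <id>test</id>
         <properties>
            <!-- Example of a value that differs per target environment -->
            <env>test</env>
         </properties>
      </profile>
   </profiles>
</project>
```

With this in place, `${env}` resolves to `test` whenever the profile is active, which is the usual way to vary things like database paths per environment.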

 4. Maven Goal 
Each phase is a sequence of goals, and each goal is responsible for a specific task.
When we run a phase, all goals bound to that phase are executed in order.
Each packaging type defines default goals bound to particular phases; for example, the compile goal of the compiler plugin is bound to the compile phase.
We can list all goals bound to a specific phase, together with the plugins they come from, using Maven's help plugin.
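For instance, to inspect the compile phase, the Maven help plugin can be invoked like this (output abbreviated, and its exact wording varies by Maven version):

```shell
mvn help:describe -Dcmd=compile
```

The output lists the goals bound to the phase; for compile it reports the compiler:compile goal, i.e. the compile goal from the compiler plugin.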


The build lifecycle is simple enough to use, but when you are constructing a Maven build for a project, how do you go about assigning tasks to each of those build phases?
The first, and most common, way is to set the packaging for your project via the equally named POM element <packaging>. Some of the valid packaging values are jar, war, ear and pom. If no packaging value has been specified, it will default to jar.
Each packaging contains a list of goals to bind to a particular phase. For example, the jar packaging will bind the following goals to build phases of the default lifecycle.
This is an almost standard set of bindings; however, some packagings handle them differently. For example, a project that is purely metadata (packaging value is pom) only binds goals to the install and deploy phases (for a complete list of goal-to-build-phase bindings of some of the packaging types, refer to the Lifecycle Reference).
Note that for some packaging types to be available, you may also need to include a particular plugin in the <build> section of your POM and specify <extensions>true</extensions> for that plugin. One example of a plugin that requires this is the Plexus plugin, which provides a plexus-application and plexus-service packaging.
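Declaring the packaging is a one-line matter in the POM; jar is shown here, which is also the default when the element is omitted:

```xml
<project>
   ...
   <packaging>jar</packaging>
   ...
</project>
```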

The second way to add goals to phases is to configure plugins in your project. Plugins are artifacts that provide goals to Maven. Furthermore, a plugin may have one or more goals wherein each goal represents a capability of that plugin. For example, the Compiler plugin has two goals: compile and testCompile. The former compiles the source code of your main code, while the latter compiles the source code of your test code.
As you will see in the later sections, plugins can contain information that indicates which lifecycle phase to bind a goal to. Note that adding the plugin on its own is not enough information - you must also specify the goals you want to run as part of your build.
The goals that are configured will be added to the goals already bound to the lifecycle from the packaging selected. If more than one goal is bound to a particular phase, the order used is that those from the packaging are executed first, followed by those configured in the POM. Note that you can use the <executions> element to gain more control over the order of particular goals.
For example, the Modello plugin binds by default its goal modello:java to the generate-sources phase (Note: The modello:java goal generates Java source codes). So to use the Modello plugin and have it generate sources from a model and incorporate that into the build, you would add the following to your POM in the <plugins> section of <build>:
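A sketch of that configuration, following the example in the official Maven lifecycle documentation (the version number and the model file path are illustrative and would differ per project):

```xml
<plugin>
   <groupId>org.codehaus.modello</groupId>
   <artifactId>modello-maven-plugin</artifactId>
   <version>1.8.1</version>
   <executions>
      <execution>
         <configuration>
            <models>
               <model>src/main/mdo/maven.mdo</model>
            </models>
            <version>4.0.0</version>
         </configuration>
         <goals>
            <goal>java</goal>
         </goals>
      </execution>
   </executions>
</plugin>
```

Note that no <phase> is given here: modello:java declares generate-sources as its default phase, so Maven binds it there automatically.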
You might be wondering why that <executions> element is there. That is so that you can run the same goal multiple times with different configuration if needed. Separate executions can also be given an ID so that during inheritance or the application of profiles you can control whether goal configuration is merged or turned into an additional execution.
When multiple executions are given that match a particular phase, they are executed in the order specified in the POM, with inherited executions running first.
Now, in the case of modello:java, it only makes sense in the generate-sources phase. But some goals can be used in more than one phase, and there may not be a sensible default. For those, you can specify the phase yourself. For example, let's say you have a goal display:time that echoes the current time to the command line, and you want it to run in the process-test-resources phase to indicate when the tests were started. This would be configured like so:
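A sketch of such a configuration (the display:time goal is hypothetical, as are the plugin coordinates; only the explicit <phase> element is the point here):

```xml
<plugin>
   <groupId>com.mycompany.example</groupId>
   <artifactId>display-maven-plugin</artifactId>
   <version>1.0</version>
   <executions>
      <execution>
         <phase>process-test-resources</phase>
         <goals>
            <goal>time</goal>
         </goals>
      </execution>
   </executions>
</plugin>
```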

1. R Programming

R is the leading analytics tool in the industry and is widely used for statistics and data modeling. It can easily manipulate your data and present it in different ways. It has exceeded SAS in many ways, such as capacity of data, performance and outcomes. R compiles and runs on a wide variety of platforms, viz. UNIX, Windows and macOS. It has 11,556 packages and allows you to browse them by category. R also provides tools to install all packages automatically as per user requirements, and it can also be assembled well with Big Data.

2. Tableau Public:


Tableau Public is free software that connects to any data source, be it a corporate Data Warehouse, Microsoft Excel or web-based data, and creates data visualizations, maps, dashboards etc. with real-time updates presented on the web. These can also be shared through social media or with a client, and the underlying file can be downloaded in different formats. To see the real power of Tableau, you need a very good data source. Tableau's Big Data capabilities make it important; it lets you analyze and visualize data better than any other data visualization software on the market.

3. Python


Python is an object-oriented scripting language that is easy to read, write and maintain, and is a free open-source tool. It was developed by Guido van Rossum in the late 1980s and supports both functional and structured programming methods. Python is easy to learn as it is similar in spirit to JavaScript, Ruby, and PHP. Also, Python has very good machine learning libraries, viz. Scikit-learn, Theano, TensorFlow and Keras. Another important feature of Python is that it can be used on almost any platform and with data sources such as a SQL Server, a MongoDB database or JSON. Python can also handle text data very well.
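As a minimal illustration of the point about text and JSON handling, here is a short sketch using only Python's standard library (the record and its field names are made up for the example):

```python
import json

# A small JSON document, as might arrive from a web API or a MongoDB export.
record = '{"name": "Ada", "scores": [90, 85, 88]}'

# json.loads turns the text into ordinary Python dictionaries and lists.
data = json.loads(record)

# From there, plain Python is enough for simple analysis.
average = sum(data["scores"]) / len(data["scores"])
print(data["name"], round(average, 1))  # prints: Ada 87.7
```

This is the kind of glue work, parsing semi-structured text into native data structures, that makes Python convenient before any of the heavier machine learning libraries are even involved.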

4. SAS:

SAS is a programming environment and language for data manipulation and a leader in analytics, developed starting in 1966 and further developed by the SAS Institute in the 1980s and 1990s. SAS is easily accessible and manageable and can analyze data from any source. In 2011, SAS introduced a large set of products for customer intelligence, along with numerous SAS modules for web, social media and marketing analytics, which are widely used for profiling customers and prospects. It can also predict their behavior and manage and optimize communications.

5. Apache Spark

The University of California, Berkeley's AMP Lab developed Apache Spark in 2009. Apache Spark is a fast, large-scale data processing engine that executes applications in Hadoop clusters 100 times faster in memory and 10 times faster on disk. Spark was built with data science in mind, and its design makes data science effortless. Spark is also popular for data pipelines and for developing machine learning models. Spark includes a library, MLlib, that provides a progressive set of machine learning algorithms for repetitive data science techniques like classification, regression, collaborative filtering, clustering, etc.
