Marc Bojoly posts

Archi & Techno

Cloud Ready Applications

The days of traditional servers are numbered. The flexibility of cloud platforms compared with traditional datacenters is the root cause of this shift. For a developer, the question is no longer where the application will be deployed, but on which cloud platform. And designing an application for the cloud is more complex than for on-premise deployment. As developers we must be ready for that: we must write code that is ready for the cloud. But what exactly is a cloud ready application? Is it a one-push deployment to Heroku? Yes,…

Read more
Archi & Techno

SQLFire from the trenches

In a first article, I explained why I think NewSQL is a disruptive storage technology designed for traditional information systems. NewSQL relies on a scalable architecture and is designed to run on commodity hardware. In order to get actual figures for SQLFire, we built a Proof of Concept for stress-test purposes. The goal of this article is to give you some feedback on these stress tests in the chosen scenario.

Read more
Archi & Techno

Let’s dig into SQLFire

A year ago, in a previous (French) article, I promised to test the ability to migrate a standard Hibernate/RDBMS application to a NewSQL technology. It is now time to give you the results of our investigation. Don't worry: I will first sum up that previous article and explain why I strongly believe that NewSQL is an important subject. Then I will present the assumptions of our POC. And finally I will give you the results of this POC, our conclusions about what we will do…
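
To give an idea of what such a migration involves at the lowest level, here is a minimal JDBC sketch, assuming the vFabric SQLFire client driver; the driver class, JDBC URL, port and table name are assumptions for illustration, not the configuration used in our POC.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal smoke test against a SQLFire member. The driver class and JDBC URL
    // are assumptions based on the vFabric SQLFire documentation; the ORDERS table
    // is a hypothetical example.
    public class SqlFireSmokeTest {
        public static void main(String[] args) throws Exception {
            Class.forName("com.vmware.sqlfire.jdbc.ClientDriver");
            Connection conn = DriverManager.getConnection("jdbc:sqlfire://localhost:1527/");
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM ORDERS");
            rs.next();
            System.out.println("Rows in ORDERS: " + rs.getLong(1));
            rs.close();
            stmt.close();
            conn.close();
        }
    }

In a Hibernate application, the entity mappings themselves can often stay unchanged; most of the migration work concentrates on the connection settings, the dialect and the data distribution choices.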

Read more
Archi & Techno

Audit with JPA: creation and update date

When writing a business application with persistent data, some auditing capabilities are often required. Today, the state of the art for persisting data is to use an ORM tool through the JPA interface. Being able to add two columns containing the creation date and the update date is a common auditing requirement. My colleague Borémi and I had to answer this question. We gathered and studied several implementations already used by other Octos. In order to help you choose the best tool for such a need,…
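
As a first taste of the kind of implementation compared in the article, here is a minimal sketch based on plain JPA lifecycle callbacks; the class, field and column names are illustrative, not those of the studied implementations.

    import java.util.Date;
    import javax.persistence.*;

    // Illustrative base class: entities extending it inherit the two audit columns.
    @MappedSuperclass
    public abstract class AuditedEntity {

        @Temporal(TemporalType.TIMESTAMP)
        @Column(name = "CREATION_DATE", nullable = false, updatable = false)
        private Date creationDate;

        @Temporal(TemporalType.TIMESTAMP)
        @Column(name = "UPDATE_DATE")
        private Date updateDate;

        // Called by the JPA provider just before the INSERT statement.
        @PrePersist
        protected void onCreate() {
            creationDate = new Date();
            updateDate = creationDate;
        }

        // Called by the JPA provider just before any UPDATE statement.
        @PreUpdate
        protected void onUpdate() {
            updateDate = new Date();
        }
    }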

Read more
Archi & Techno

My reading of Percolator architecture: a Google search engine component

In April 2010, Google updated its indexing system. Caffeine, the name of this project, went largely unnoticed by the general public but represents an in-depth change for Google. It does not directly improve the search page, as Instant Search does, but rather the indexing mechanism, the way relevant search results are produced. For the end user, this change reduces the delay between when a page is found and when it is made available in Google search. Google has recently published a research…

Read more
Archi & Techno

Using Hadoop for Value At Risk calculation Part 6

In the first part, I described the potential benefit of using Hadoop for Value At Risk calculation in order to analyze intermediate results. In the next three parts (2, 3, 4), I detailed how to implement the VAR calculation with Hadoop. Then, in the fifth part, I studied how to analyze the intermediate results with Hive. I will now give you some performance figures on Hadoop and compare them with GridGain's. Based on those figures, I will detail some performance key points…

Read more
Archi & Techno

Using Hadoop for Value At Risk calculation Part 5

In the first part of this series, I introduced why the Hadoop framework could be useful to compute the VAR and analyze intermediate values. In the second, third and fourth parts, I gave two concrete implementations of the VAR calculation with Hadoop, along with optimizations. Another benefit of using Hadoop for Value At Risk calculation is the ability to analyze the intermediate values inside Hadoop through Hive. This is the goal of this (shorter) part of the series.
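
As a minimal sketch of what such an analysis can look like, the query below reads hypothetical intermediate results through Hive's JDBC interface; the driver class, URL, table and column names are assumptions for illustration, not the ones used in the series.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Illustrative only: the intermediate_results table and its columns are
    // hypothetical; the driver class and URL correspond to the classic HiveServer
    // JDBC interface and are assumptions, not the series' setup.
    public class HiveIntermediateResultsQuery {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
            Connection conn =
                DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
            Statement stmt = conn.createStatement();
            // Example analysis: the 10 worst simulated values overall.
            ResultSet rs = stmt.executeQuery(
                "SELECT scenario_id, draw_value FROM intermediate_results "
                + "ORDER BY draw_value ASC LIMIT 10");
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
            }
            rs.close();
            stmt.close();
            conn.close();
        }
    }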

Read more
Archi & Techno

Using Hadoop for Value At Risk calculation Part 4

In the first part of this series, I introduced why the Hadoop framework could be useful to compute the VAR and analyze intermediate values. In the second and third parts, I gave two concrete implementations of the VAR calculation with Hadoop. I will now give you some details about the optimizations used in those implementations.

Read more
Archi & Techno

Using Hadoop for Value At Risk calculation Part 3

In the first part of this series, I introduced why the Hadoop framework could be useful to compute the VAR and analyze intermediate values. In the second part, I described a first implementation. One drawback of that implementation is that it does not take advantage of the reduce pattern: I did the reduction by hand. I will now fully use Hadoop's reduce feature.
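
To make the idea concrete, here is a minimal sketch of a reducer using Hadoop's reduce feature (the org.apache.hadoop.mapreduce API); the class name, key/value types and percentile logic are illustrative assumptions, not the article's actual implementation.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Illustrative reducer: gathers the simulated values emitted by the mappers for
    // one key and extracts the 1% worst outcome as the VAR.
    public class VarReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {

        @Override
        protected void reduce(Text scenario, Iterable<DoubleWritable> simulatedValues,
                              Context context) throws IOException, InterruptedException {
            List<Double> values = new ArrayList<Double>();
            for (DoubleWritable value : simulatedValues) {
                values.add(value.get());   // copy: Hadoop reuses the Writable instance
            }
            Collections.sort(values);
            // VAR at 99%: the value below which 1% of the simulated outcomes fall.
            int index = (int) Math.floor(values.size() * 0.01);
            context.write(scenario, new DoubleWritable(values.get(index)));
        }
    }

Compared with a hand-rolled reduction, the framework takes care of grouping the values by key and feeding them to this method, which is exactly the pattern the article moves to.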

Read more