A couple of videos with me are now available on the web.
The first one was taken at TheServerSide Java Symposium panel discussion on performance, in which I joined moderator John Davies and panelists Bob Lozano, Founder & Chief Strategist of Appistry,
Gil Tene, CTO and co-founder of Azul Systems, and Kirk Pepperdine. The second video is a ServerSide interview with me by Joe Ottinger on a variety of topics related to GigaSpaces, scalability, Amazon EC2, utility computing and our new Developer Contest. I hope you'll find it entertaining.
Joe: Good afternoon Nati. We are here with Nati Shalom, CTO of
GigaSpaces and we would like to talk a little bit about scalability.
Nati, scalability has really kind of picked up in terms of how many
people are talking about it and where they are talking about it. Why is
it such a hot topic today?
Nati Shalom: Well, that is actually a good question. I have been wondering about it for a while, and I think there are different trends converging in this space; it depends on the sector. If you look at the financial sector, you will see the move to electronic trading and more real-time processing: front-office and back-office applications are growing closer in their need to run analytics-type operations during the day rather than overnight, and to produce reports in real time. All of that is a big driver for scalability in that sector. Web 2.0 is obviously another big sector that is booming these days, and because it is booming and moving to a more real-time, richer web experience, it also drives a lot of scalability. We are no longer talking about the 'read-only' web; it is a read/write web, as some would call it. As all those trends converge, they lead to a more real-time experience in general, and that real-time experience changes a lot of the requirements: the volumes of data that need to be addressed, the number of concurrent users, and the amount of throughput that needs to be pushed through the same systems. All those different dimensions are creating a demand for scalability.
Joe: Well, you mentioned Web 2.0 and the financial markets in particular. Aren't those different definitions of scalability, one dealing with data throughput on the back end, and the other focusing more on interactivity with a web server?
Nati Shalom: To a degree I would say yes, but again, we do see a convergence. The way the web stands today, web scalability is mostly read scalability: the number of concurrent users, the number of hits per second. In the financial sector, as you rightly mentioned, scalability is really about the amount of data. But the trend is changing. Take e-commerce as an example: e-commerce is very similar in its behavior to electronic trading, or to trading in general, so it becomes less of a 'read-only' operation and more of a 'read/write' one, very similar to what we are seeing in the financial sector. The same goes for telco. The rich clients in the more sophisticated phones we have today, obviously the iPhone and all the smartphones we are seeing, create different behavior and different requirements. It is not just pure voice; it is rich, real-time media. So they all start from different angles and different dimensions of scalability, but if you look at it over time, you will see that in the end they all hit almost every dimension of scalability: they will need data scalability, concurrent-user scalability, and throughput scalability. They just start from different points.
Joe: Well, let us look at GigaSpaces in particular. What technologies does GigaSpaces offer to enhance scalability?
Nati Shalom: I think the evolution of GigaSpaces actually started when I was involved in building a B2B exchange. On that B2B exchange we very quickly found ourselves at a point where, with the then-current middleware-based approach, we hit the scalability wall: we could not really scale, and we found ourselves building a lot of infrastructure around that. GigaSpaces came out of that background, and the first thing we wanted to address is what I call the main bottleneck, the first bottleneck you hit when you start to deal with scalability. So the first part GigaSpaces deals with is the scalability of the data tier. Later on we started to deal not just with the scalability of the data but with the scalability of the entire application, and that ends with two flavors of our product. One is called GigaSpaces XAP, the eXtreme Application Platform, which is the equivalent of an application server to a degree. It leverages Spring as a development framework and introduces concepts like an SLA-driven container to address the scalability of the application across multiple machines and to deal with the operational side of scalability. The other flavor is called GigaSpaces Enterprise Data Grid. The Enterprise Data Grid is the equivalent of today's database: it deals with the scalability of data, removing the data bottlenecks. The two editions are combined into the same product set and provided as two flavors of that product set.
Joe: Well, so if your product replaces or can serve as a layer
between an application and the database, what is the purpose of a
database or a file system or a messaging service, if you are using
GigaSpaces as a platform?
Nati Shalom: Well, I would say that databases in particular were used in the past for different purposes, durability, persistence, reliability and so on, beyond what they were designed for. Databases, for example, are good for persistence, for recording, and for running large sets of queries in a data warehouse; using them also for in-flight transactions was, in my view, a misuse of the technology. So the main difference in that regard is that we really put each piece, whether it is a database or a messaging system, in the right place, and we leverage things that were not available in the past, like the availability of memory. The fact that we can hold the data in memory, and move the transactional characteristics into memory, enables us to push the database effectively a step back in the food chain of transaction processing, or application processing. So the role of the database in today's systems is to be the durable storage: where you run the reporting systems, and where you keep long-lived data, while the in-memory storage is used for the in-flight transactions and, to a degree, as the system of record the application interacts with. The same goes for messaging and for the file system. The main difference we are seeing is that all those different technologies were built around the notion of a tier-based model, the notion that you can split the problem into separate tiers and glue them together in the application context. What we see is that all of those pieces, especially when you add scalability, need to interact with each other; within the context of an application they have very high runtime dependencies, from a latency perspective, a failover perspective, and a scalability perspective. That is why we are coming with a fundamentally different approach to how those things need to be built: in our approach they all need to be part of the same virtual cloud and should not be built as separate tiers. That, I would say, is the main difference in our approach to those different technologies.
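The idea of pushing the database "a step behind" can be sketched as a simple write-behind store. This is an illustrative toy, not GigaSpaces code; the class and method names are hypothetical. The transaction completes against memory, while durability is deferred to a queue that a background thread would drain to the database:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Toy write-behind store: memory is the system of record for in-flight
// work; durable writes happen asynchronously, off the hot path.
class WriteBehindStore {
    private final Map<String, String> memory = new ConcurrentHashMap<>();
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    public void put(String key, String value) {
        memory.put(key, value);   // the operation completes against memory
        pending.offer(key);       // queued for later persistence to the database
    }

    public String get(String key) { return memory.get(key); }

    // In a real system a background thread drains this queue to durable storage.
    public int pendingWrites() { return pending.size(); }
}

public class WriteBehindDemo {
    public static void main(String[] args) {
        WriteBehindStore store = new WriteBehindStore();
        store.put("order-17", "NEW");
        System.out.println(store.get("order-17"));  // NEW
        System.out.println(store.pendingWrites());  // 1
    }
}
```

The reporting and long-lived-data roles stay with the database; only the in-flight path moves into memory.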
Joe: Well, let us look at scalability as a whole. What are the common principles you would use to solve this scalability issue? How do you determine there is a scalability problem and, once it is determined, how can you fix it?
Nati Shalom: Yes, that is actually a very good point. What we are seeing is that today, people do not design an application for scalability from the get-go, because it is very complex. The most obvious bottleneck most people hit is the database. You start to push more throughput into the application, you run more users trying to access the same application, and one of the symptoms you will see is latency: a slow hit rate, slow responses if you are building a website. If you are building a transaction system, as in the financial world, what you will see is that you are limited in the amount of throughput you can push through the system. No matter what you do, you will hit maximum CPU utilization, maximum disk utilization, and you will hit the wall there. You will not be able to meet the requirements of your application, or the requirements of the business, from a throughput perspective. These are the symptoms you would normally see. People start to analyze why, and where the bottleneck is, and when they look at how transactions flow through the system, they see that the most obvious bottleneck, as I mentioned, is the database. Through basic analysis you know that it really is the bottleneck, and the way to address it is to replace it with something faster. That, I would say, is the most common way people address scalability. What they soon realize is that all they get at the end is improved performance; they do not really deal with scalability. They improve the throughput of the system by lowering the amount of I/O, or writes to the database, but they still cannot meet the ideal goal, which is my definition of scalability: at least linear scalability, the ability to increase the throughput, to increase the hit rate on my website, to increase the amount of data I can load, without changing my code. That is the key principle, and I can only do it if I really meet the scalability requirements: I can deal with all those different demands just by adding more units, without changing anything, in some cases even during runtime. Just solving the data bottleneck will not get me to that point, but I would say that most people start from there.
Joe: Well, you said you improve scalability without changing code, but if you are adding an architectural element, isn't that by definition changing your code?
Nati Shalom: To a degree I would say yes, and for that there is an evolution path, an evolution toward a revolution, if you like. It basically means you do not have to change the architecture on day one to get to scalability. If you have an existing application, there are several principles you will need to go through: dealing with the data bottlenecks, then the application bottlenecks, then the messaging bottlenecks, and you do not have to deal with all of that on day one. Dealing with the data bottleneck can be done without changing the architecture and with a very slight change in the code. If you are using an abstraction at the data layer, the change would be confined to the data-tier abstraction; in Spring that would be the DAO. That would be the only change in your code. It would not be an architectural change; it would be an optimization, replacing one data-access implementation with another. If you are using Hibernate there is an even smaller step you can take, which is using a Hibernate second-level cache. That will obviously not get you to linear scalability, but it will improve the throughput of your system, which in a way improves scalability. The second level would be getting to linear scalability. That step does require an architectural change, and regardless of the solution, if you look at what eBay is doing, what Amazon is doing, what Google is doing, they had to go through that step no matter what, whether with GigaSpaces, with something else, or on their own. It is mostly related to the fact that you need to go through a partitioning model: moving from a centralized model to a partitioned model, in which you split the data, as well as the transactions and the business logic, into multiple self-sufficient units. Only once you have those self-sufficient units can you start to scale linearly. So you have to go through that architectural change if you need to meet the scalability requirements, and you have to go through it regardless of the product you are using. What we are doing in GigaSpaces is reducing the amount of code change you need to go through, via a set of abstractions. For example, if you are using JMS, we just replace the runtime implementation of JMS with an implementation that fits the partitioned model; the same goes for the data. We also implement different abstractions in Spring, like declarative transactions and remoting, all of which decouple the actual code from the runtime environment. In that way we push most of the change, not all of it, but most of it, into the runtime middleware rather than into the application code. Having said that, you still need to be aware of what is going on behind the scenes. You need to know that partitioning has implications for how the application runs and how the data needs to be distributed, and that is a step you need to go through if you want to reach linear scalability, regardless of the solution you are dealing with.
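The data-tier swap described above can be sketched roughly as follows. The names here (`AccountDao`, `InMemoryAccountDao`) are hypothetical, not GigaSpaces classes; the point is that business code depends only on the DAO interface, so the wiring (for example, in Spring configuration) can switch from a JDBC-backed bean to a data-grid-backed one without touching the callers:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Business code depends only on this abstraction (the data-tier DAO).
interface AccountDao {
    void save(String id, double balance);
    Double load(String id);
}

// Stand-in for a data-grid-backed implementation; a JDBC- or
// Hibernate-backed class would implement the same interface.
class InMemoryAccountDao implements AccountDao {
    private final Map<String, Double> grid = new ConcurrentHashMap<>();
    public void save(String id, double balance) { grid.put(id, balance); }
    public Double load(String id) { return grid.get(id); }
}

public class DaoSwapDemo {
    public static void main(String[] args) {
        // In Spring this binding would live in configuration, not in code.
        AccountDao dao = new InMemoryAccountDao();
        dao.save("acct-1", 250.0);
        System.out.println(dao.load("acct-1")); // 250.0
    }
}
```

Swapping the implementation is an optimization, not an architectural change, which is exactly the "evolution" step described above.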
Joe: One of the things you mentioned sparked an idea. Has anyone considered actually building a web server on something like GigaSpaces, where a front end simply passes back a request and a GigaSpaces deployment actually builds the response?
Nati Shalom: Well, that is actually an idea some of our customers have been using, and it is something we will probably introduce into our product in a later release; we are actually working on something along those lines these days. If you look at GigaSpaces and the product today, you will see that we already support deploying the web tier, if you like, as part of our containers, and the next step would be to virtualize that so it looks like a single host, a single web server, from an end-user perspective. We are not there yet, but we are very close.
Joe: That is fascinating. Well, let us talk a little bit about extreme transaction processing. Can you define exactly what that is, in relation to everything else we have been talking about, and tell me what role GigaSpaces plays in it?
Nati Shalom: Yes, definitely. To answer that question, we need to go a step back. If you look at transactions and how they evolved: transactions started on the mainframes back in the 70s with TP monitors; later came Encina and Tuxedo (the latter bought by BEA); and later on J2EE, with OTS and CORBA, specified a standard way to deal with transaction processing. The model itself was built on something called two-phase commit, and that model basically lends itself to a centralized coordinator: the way you deal with consistency between different systems in a distributed environment is to go through a central place, and that place is responsible for coordinating the transactions of all the units. Now, everyone who deals with scalability realizes very early in the process that this approach contradicts scalability, because it creates a central point of synchronization. No matter how much you split the application into multiple units and partition it, if in the end you go to a central place for every operation and every transaction, that basically blocks the scalability of the application, because you are not really utilizing all those different resources in the environment. As for that realization, I think there was a paper by Pat Helland, previously at Microsoft but now at Amazon, and there have been a lot of talks at recent conferences by eBay and Amazon on the fact that today you need to live in a world without distributed transactions because of that bottleneck. There are different patterns for implementing consistency without distributed transactions, and more specifically without XA transactions, without the centralized approach to transactions. There are still transactions in place, but they are mostly local, local to a partition; they are not the classic two-phase commit we have seen in the tier-based model, or the J2EE model. XTP is trying to define transaction semantics that fit this new age of scalable, distributed, high-performance, low-latency environments. In a way we can look at it as the third-generation transaction model, and that is pretty much, I would say, the definition behind XTP. There is as yet no standard behind it, no specification that defines what it really is. It is mostly a pattern, and different applications, platforms, and vendors call whatever they do XTP; basically, everyone is right to a degree.
GigaSpaces started by looking at XTP as something that, again, emerged from that old two-phase-commit model, but we have taken it a few steps further and looked at how we deal with transaction processing in a stateful, high-performance, low-latency environment. GigaSpaces XTP tries to deal with consistency without putting the burden on the application developers themselves. We deal with transactions without two-phase commit, but still in a way that maintains consistency across the different application components, and that is what the GigaSpaces XTP application platform is all about: the ability to provide consistency and scalability without compromising one for the other and, more importantly, without adding complexity to your application code, as most of the other solutions or patterns do.
Joe: Now, GigaSpaces is based loosely around the JavaSpaces architecture model, correct?
Nati Shalom: Yes.
Joe: Well, can you summarize the JavaSpaces model for us, just to go back over it, and also mention how JavaSpaces works with JINI?
Nati Shalom: Yes. It is very relevant to what I said earlier, because JavaSpaces was used in a JINI environment as an information and messaging cloud; some would call it a coordination system. It was initially built to coordinate both data synchronization and event synchronization between distributed services, and I would say the simplest way to describe it is as a technology that combines the worlds of messaging and data, and that is the value the space really brings to the picture. JINI is a Java-based, service-oriented environment which adds a discovery model, transaction semantics, and some of the semantics of how you build services in a distributed environment. So again, JINI is a service-oriented environment mostly dealing with discovery: the discovery protocols between services, and how services publish and discover one another. JavaSpaces is used as, if you like, the cloud that enables those services to coordinate data, events, and operations among themselves.
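The coordination primitives behind the space model can be illustrated with a toy tuple space. The real API is `net.jini.space.JavaSpace`, which matches entries by template and supports leases and transactions; this simplified sketch matches by an exact key, purely to show the write/read/take semantics that let services coordinate data and events:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy tuple space: write publishes an entry, read is a non-destructive
// match, take is a destructive match (the basis for master/worker patterns).
public class ToySpace {
    private final List<String[]> entries = new ArrayList<>();

    public synchronized void write(String key, String value) {
        entries.add(new String[]{key, value});
    }

    public synchronized String read(String key) {
        for (String[] e : entries)
            if (e[0].equals(key)) return e[1];
        return null;
    }

    public synchronized String take(String key) {
        Iterator<String[]> it = entries.iterator();
        while (it.hasNext()) {
            String[] e = it.next();
            if (e[0].equals(key)) { it.remove(); return e[1]; }
        }
        return null;
    }

    public static void main(String[] args) {
        ToySpace space = new ToySpace();
        space.write("task", "payload");
        System.out.println(space.read("task")); // payload (still in the space)
        System.out.println(space.take("task")); // payload (removed)
        System.out.println(space.take("task")); // null
    }
}
```

A worker looping on `take` while a master loops on `write` is the classic space-based coordination pattern the interview alludes to.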
Joe: Well, let us talk about something related. Now, Amazon has the Elastic Compute Cloud and also the Simple Storage Service. Does GigaSpaces work with those?
Nati Shalom: Yes. It is actually a very interesting thing. I strongly believe we are moving toward a more utility-based model, a service model, not just as a component within my architecture but as a business model. We have seen that happen in other industries, like CRM with Salesforce, and I believe we are going to see it in the middleware world as well. Today, the experience is that in order to evaluate or build your application, you need to download the product, try it on your own machines, then go to your IT department and ask for, say, ten machines just to play around and build the first prototype of your application. That takes time, and you need to go through a lot of process just to get to that point. By using something like Amazon EC2 we can change that whole experience. Instead of downloading a product and trying it, we just get a handle to a product that is already installed and pre-configured in that type of environment. Instead of going to IT, all we need to tell that utility-based farm is how many instances we want to run at each particular time, what type of hardware configuration we want to use, and for how long, and we get access to those environments spontaneously, without going through the hurdle of manual operations and all the bureaucracy we normally have. With that we can have a full-blown cluster and application model running in a matter of minutes. That is where we see the combination of GigaSpaces and Amazon Elastic Compute Cloud fitting in very nicely. There are actually different applications that, I would say, fit that model very nicely. For example, look at analytics. Analytics happens to be a type of application that is very resource-intensive at a particular point in time: you need to load a lot of data, run processing over it, and then generate some sort of report, a P&L, a reconciliation, or any type of report you can think of. Once you have generated the report, in most cases you do not really need those resources anymore, and you can free them for others to use, and that is pretty much it. It is a classic case: instead of buying all those machines to run those reports, you subscribe to a service, run the reports in that type of environment, and then free the resources once you are finished, without talking to any human or anyone in your IT operations. That is how beautiful that model could be. Obviously it is a vision; it is not there yet, but it is very close.
Another application that would fit very nicely into that model is testing. How many times do we need to run large benchmarks of our applications? Do we need those resources to run our benchmark all the time? No. We need them at the particular point in time when our project is ready for the benchmark. Today it is very tedious work, and we again need to go through a lot of manual process and bureaucracy just to get to that point, which really leaves a lot of us at a point where we cannot even do it. Imagine a point where you do not need all that interaction and bureaucracy: you just click a button and say "I want to run my application over 50 machines", you get those 50 machines, and it runs. Once you finish, you free them up, and again no human interaction is involved in setting that up.
Joe: Well, let us look at another concept that fits into the same space: MapReduce. Google uses MapReduce to process and generate large data sets, largely for search, producing basically key/value pairs. How does GigaSpaces compare to searching through a process like that?
Nati Shalom: Well, first I think we need to understand why they use MapReduce. When I talked about scalability, I mentioned partitioning as one of the common solutions; I think someone from eBay, in one of the presentations I attended, said that if you cannot partition your application, you cannot scale it. It is as strong as that. The downside, or I would say the symptom, of partitioning is that you no longer have all the data in the same place. That is where MapReduce comes into play: it complements partitioning. Because the data is not in one place, you need a parallel way to run queries over the data in those multiple file systems, in the Google world. MapReduce and partitioning go hand in hand, and that is why you hear about them almost as synonyms. In our world, because we are not using a file system to store the data, we are using memory, we implement a parallel-query pattern, and that pattern enables us to run queries against our data-grid instances, if you like. Now, an interesting angle is that once you go down that path, there is nothing preventing you from running not just key/value matching but a whole set of code as part of the query. You can actually distribute code in parallel, run it in each partition, and leverage the fact that the code runs co-located with the data. The fact that the data is in memory gives you very fast access by reference, and you can do things you would not even imagine doing, in terms of performance, and get very complex queries, even aggregate queries, implemented that way. In our case, the way we implemented the MapReduce model is through a remoting abstraction we call a custom query, and there are different ways to invoke it. You can invoke it based on the query: for example, if you say "I want to look only at customer ID 1", you do not need to change your code for that; we see that you are very specific in your query and we launch it against the single partition instance. At another point in time you might say "I want the sum of all the accounts of that customer", and it may happen that the accounts are spread across multiple partitions rather than stored in one specific partition. In that case, we run a parallel query that spans the partitions, computes the sum locally in each partition, and brings back all the partial results; the reduce part then takes the partial results from each partition and aggregates them into the total sum, which is what the end user gets. And you can be pretty much in between the two worlds: you can say "Well, I want to run the query only on accounts 1, 2 and 3", and we launch a parallel query against partitions 1, 2 and 3, not all of them. That is how flexible it can be. It can be launched on one instance or on multiple instances, but the general idea is that it is designed to deal with the fact that the data is distributed, and because the data is distributed and you still want to aggregate results from those distributed data sources, you need the ability to run two-phase queries: one phase that runs locally with the data, and one that aggregates at the client and reduces the result, if you like.
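The parallel-query example described above, summing account balances across partitions, can be sketched as a minimal scatter-gather. This is an illustrative toy with hypothetical names, not the GigaSpaces API: the map step sums locally within each partition's slice of the data, and the reduce step aggregates the partial sums at the caller:

```java
import java.util.List;
import java.util.Map;

public class ParallelSum {
    // Map step: each partition sums its own slice of the accounts locally,
    // co-located with the data.
    static double mapLocalSum(Map<String, Double> partition) {
        return partition.values().stream().mapToDouble(Double::doubleValue).sum();
    }

    // Reduce step: combine the partial results returned by each partition.
    static double reduce(List<Double> partials) {
        return partials.stream().mapToDouble(Double::doubleValue).sum();
    }

    public static void main(String[] args) {
        Map<String, Double> partition1 = Map.of("acct-1", 100.0, "acct-2", 50.0);
        Map<String, Double> partition2 = Map.of("acct-3", 25.0);
        double total = reduce(List.of(mapLocalSum(partition1), mapLocalSum(partition2)));
        System.out.println(total); // 175.0
    }
}
```

In a real deployment the two map calls would execute in parallel on separate machines, and only the two partial sums would cross the network.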
Joe: So, we have talked a lot about the concepts behind scalability and space-based architecture and the ways you can achieve all these fantastic results, but let us step behind the desk of a programmer. How does one design a system to scale up? How does one get scalability without having to change the architecture?
Nati Shalom: Yes, that is a question I get a lot once I start to introduce all those patterns. People hear about partitioning, no transactions, all those different things, and they start to freak out and say, "Oh, that is a nice concept, but it is very different from what I am used to and what I am doing today, and how the hell do I know I am going to be successful doing it? It is a big risk", and so on. Some have no choice: they hit the wall, so they have to go through those steps. But some people will avoid it as much as they can, and what we see as a result, and websites are actually a very good example to look at, is that a lot of the big websites today, including eBay and even Amazon, started with a very simple architecture that did not think about scalability at the beginning, and later on they had to go through a complete rewrite. There is even a story, which probably happens to be true, that eBay was built out of a single DLL at the beginning. So, from what I have seen at recent conferences, a lot of people, because all those concepts are complex, do not really deal with scalability at the beginning; they figure it out at a later stage, because it is very complex and they need to deal with business logic first.
What you need to do in order to build the application to be scalable is something we are trying very hard to simplify. By simplifying, I mean that we are trying to keep the programming model of how you build the application pretty much the same as, or similar to, the programming model you are used to today. From an API perspective you would still write to the data source, you would still get messages through JMS, for example. If you are using Spring you would write your application using beans, in a declarative manner, and so on. The programming model would look very similar to what you are used to, and, again, we are trying to push most of the logic into the runtime rather than into how you develop your application. Obviously, you will need to be aware of what is going on behind the scenes, as I mentioned earlier, when you write the application, but the code itself would not really be too aware of all those things happening behind the scenes. You need that awareness mostly for performance and optimization, to really get the best out of the underlying capabilities of the infrastructure, but the big challenge is to make it as transparent as possible. That is where we have invested a great deal of our work: in building the component model around Spring, so that this component model really abstracts how you get events, how you access the data, and how you deal with transactions, and that is what enables us to push a lot of that logic, and a lot of those changes, into the runtime environment.
Even in partitioning you do not
need to see in your code where the data fits and where it goes.
Basically what we do is we look at certain elements of the data that
you tell us to look at and we’ll deal with the partitioning based on,
let us say the hash code or the attribute of the data, so the first
element is that you do not have to know a whole lot of API and a whole
lot of code to build the application in a scalable way. What you do
need to know in terms of architecture is how to architect your
application such that it will run in the most optimal way in a scalable
environment and for that purpose you need to be aware of concept
regardless again of our solution or any other solution. Most of the
thing that you need to be aware of what it takes to build partitioning
or to partition your data and in a way once you crack that piece then
you are ready to go. The other piece that we provide from the
programming model to help you build your application in that type of
environment is called the processing unit. The processing unit is
the logical unit of code that you write and then duplicate to
scale. This unit includes the data element, the
messaging element, and the business logic, and it is also
the unit of failover and scale of your application. So the
development cycle looks like this: you first build a single
processing unit that runs on a single machine, on your laptop. You write
it using Spring, much the same way you would write almost any
J2EE or Spring application. You need to write it in a
way that is, I would say, partition aware, but you do not have to
deal with any of that at that stage. Once you have
your business logic up and running in
Eclipse, on your desktop, and all the business logic is fixed, the
next step is to scale it out, and scaling it out does not require
any change to the code of that processing unit. It is
just telling the system to run more of those units, and what we
do is glue them together so they act like a virtual cloud, so in
the end you do not need to change any code from that single process
running on a single machine in order to run it in a large cluster
environment. We also help you deal with how you deploy it and
how you run it, and that is something we added to Spring
called an SLA-driven container. The SLA-driven container
abstracts whether you run on a single machine or on a
huge cluster. You do not need to know about the hosts and the many
machines running out there; all you need to do is click a
single deploy button, and that will handle the
mapping of those instances into that type of environment.
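The hash-based routing and the duplicate-to-scale idea described above can be sketched in plain Java; the `ProcessingUnit` and `Cluster` classes here are illustrative stand-ins, not the GigaSpaces API. Each unit owns a slice of the data, and a routing function built on the designated attribute's hash code decides which unit handles a given key, so scaling out means running more identical units with no change to the unit's own code.

```java
import java.util.*;

// One "processing unit": bundles its slice of the data and the logic over it
class ProcessingUnit {
    final Map<String, Integer> data = new HashMap<>(); // this unit's data partition

    void process(String key, int amount) {
        data.merge(key, amount, Integer::sum);          // the business logic
    }
}

// Hypothetical cluster: N identical units glued together by hash routing
class Cluster {
    private final List<ProcessingUnit> units = new ArrayList<>();

    Cluster(int partitions) {
        for (int i = 0; i < partitions; i++) units.add(new ProcessingUnit());
    }

    // Route on the designated attribute's hash code; floorMod keeps
    // negative hash codes in range
    int routeTo(String key) {
        return Math.floorMod(key.hashCode(), units.size());
    }

    void process(String key, int amount) {
        units.get(routeTo(key)).process(key, amount);
    }

    int lookup(String key) {
        return units.get(routeTo(key)).data.getOrDefault(key, 0);
    }
}
```

Note that the calling code is identical whether the cluster holds one unit or many; only the constructor argument changes, which mirrors the "run more of those units" step in the interview.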
So to sum that up, the programming model looks pretty much the
same as the programming model you are using today. The runtime
environment is virtualized, which really means it is very
different. To get to linear scalability, the main thing, and almost
the only thing, you need to be aware of
from an architecture perspective is how to partition your
application. If you already have an existing application, you do not
even have to go through that step at first. You can leave your
architecture the same way; it is perfectly fine to start just by saying,
okay, I know that I need to partition my application to scale but I
cannot do it right now. You can just solve your data
bottleneck, replace that piece, and gradually add scalability to
the application, and this way you can deal with the scalability
requirements of, let us say, the next year and later on move to the
partitioned model. What we provide in that regard is a consistent
model, which I call the three-step approach, so that
you do not need to go through a whole re-architecture every
time you add another step to your scalability roadmap, if
you will. That enables customers, whether they have
an existing tier-based application using Spring or
J2EE or are building a whole new application, to
really start small and grow up to a whole Google-like cluster without
changing the code and without going through the step of redesigning or
re-architecting the application to meet different scalability
requirements, and that is the main message that I think we bring to the
table with our platform.
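The incremental path described here, swapping out just the data bottleneck while leaving the rest of the architecture alone, relies on the business code depending on an abstraction rather than on a concrete store. A minimal sketch of that design choice, with all names (`DataStore`, `InMemoryStore`, `BalanceService`) hypothetical:

```java
import java.util.*;

// The abstraction the business logic depends on; swapping the
// implementation (database-backed today, in-memory data grid tomorrow)
// requires no change to the code that uses it
interface DataStore {
    Integer get(String key);
    void put(String key, Integer value);
}

// Step one of the migration: an in-memory store standing in for the
// database bottleneck
class InMemoryStore implements DataStore {
    private final Map<String, Integer> map = new HashMap<>();
    public Integer get(String key) { return map.get(key); }
    public void put(String key, Integer value) { map.put(key, value); }
}

// Business logic: written once, unchanged as the store underneath evolves
class BalanceService {
    private final DataStore store;
    BalanceService(DataStore store) { this.store = store; }

    void credit(String account, int amount) {
        Integer current = store.get(account);
        store.put(account, (current == null ? 0 : current) + amount);
    }

    int balance(String account) {
        Integer v = store.get(account);
        return v == null ? 0 : v;
    }
}
```

Later steps of the roadmap (partitioning, scaling out) can then change what sits behind `DataStore` without touching `BalanceService`.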
Joe: Excellent. What is next?
Nati Shalom: Well, the
future looks bright. I think the main thing I am seeing is
that a lot of customers, a lot of people in the world, are starting to
realize that they need to go through that step, so we are seeing a lot
of demand for the things that I have been talking about for almost seven
years, which makes this a very exciting point in time to be in. What we
also realize is that a lot of those people are the innovative ones, startups
for example, and a lot of new ideas actually come from those startups. I
mean, if you think about it, Google was a startup not long ago, eBay is
the same, and Yahoo not too long ago. So, as a startup ourselves, we
want to embrace those companies, and we launched a
whole new program called 'The Startup Program' that enables
those startups to scale their applications
from day one and avoid going through this re-architecture phase. Once
they grow, they can really go through that step in a very smooth way,
and that is where the startup program comes into place, to let
them scale not just technically but also from a pricing
perspective, meaning they get the license free, and only when
their business grows and justifies it economically
will they pay the cost for the software.
Another aspect of that is ease of use, making scalability
as easy as running your application on a single server.
That is our mission, that is our goal, and we see the whole complexity
that people have to deal with today as a problem that needs to be solved
at the middleware stack, not at the application layer. So, we are
doing a lot of work to simplify that, not just through the things I
mentioned, like the Spring abstraction and the SLA-driven container, but
by integrating with Amazon's Elastic Compute Cloud (EC2) and other
utility computing models. In addition to that, we will be adding
OSGi support to enable more dynamic types of applications running in
these kinds of environments. So that is on the technical side.
Another thing we are introducing, from a community perspective, is
a whole community process. The idea of the community process is
to open things up. We feel that we are hitting something that is much bigger
even than GigaSpaces; it is a whole new architecture that is going to
revolutionize a lot of the things we are used to, and obviously we
cannot do it on our own. Because of that, we are
creating an ecosystem around the same concepts to enable others to take
part in that revolution and contribute to it, add their ideas, add
their comments, all those types of things. For that purpose we
will be launching OpenSpaces.org, a community website that
will basically be an open source community project where people can host and
share ideas and contribute to one another. We will also be launching a
Killer Application contest that will let people compete on
the best application that takes advantage of all the things we
discussed here and prove to the world what you can really do today
and how simple it can be: building a Google-like application can be as
simple as writing a single application today, without having to invest
too much in it. So that is where we are heading, and as I said, the
future looks bright.
Joe: Well that sounds really interesting and we at TheServerSide
look forward to seeing all these things come to fruition. Nati, I
really appreciate you taking the time to talk to us today.
Nati Shalom: Thank you very much, Joe.
Joe: And we will be looking forward to seeing you very soon.
Nati Shalom: Thank you.