Questions on the purpose of the framework

Sep 26, 2009 at 12:34 PM

I was intrigued by your framework and have a couple of questions, because I suspect I'm thinking about using it outside of its intended purpose. My thoughts were heading toward the database-replacement side.

-- Is the primary purpose of the framework to handle automatic graph integrity and validation within an object model, rather than being thought of as a replacement for a traditional database?

-- Am I right in thinking that when committing a transaction and saving data, the entire object graph is serialized back to the data store? In other words, there is no way to write just a small change, even if I modified only one integer field, for example?

I ask because I have a small project, maybe a dozen objects in the domain model, that I would normally map to a database. I thought that maybe ObjectLounge would let me skip the db mapping part entirely. But I wonder about its usefulness for this application because of the need to write the entire object graph each time I make a field change.

One thought I had was that, while my domain object graph does contain cycles, I could easily pick a "master object" for a "layer." (I'm just making up terms here.) Could I then write changes, one layer per database row, as smaller chunks? But there would have to be a mechanism to handle duplicate objects, like your weak references to PostCategory in the sample, now spanning multiple rows.

(ObjectLounge makes me think of chillout music ... listening to SofaSpace.net right now.)

Sep 26, 2009 at 10:30 PM

Hi,

Thanks for the questions - they are really good ones!

-- Is the primary purpose of the framework to handle automatic graph integrity and validation within an object model, rather than being thought of as a replacement for a traditional database?

Both - it is there to handle graph / referential integrity and validation (which in traditional databases are usually expressed as constraints, though validations are far more powerful than constraints), but it also handles persistence and concurrency concerns such as isolation. THEREFORE it can be used as a replacement for a traditional database.

A common client/server use case in our projects is this one: We host a domain model (== a singleton instance of the context class) in a process. Clients access the model (data as well as functionality) through a service facade, in our case typically WCF.
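A stripped-down sketch of that hosting pattern - note that BlogContext and its members are just illustrative stand-ins here, not the actual ObjectLounge API:

using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;

// Stand-in for a domain model; in a real app this would be the
// ObjectLounge context class with all its validations.
public class Post { public string Title { get; set; } }

public class BlogContext
{
    public readonly List<Post> Posts = new List<Post>();
}

[ServiceContract]
public interface IBlogService
{
    [OperationContract]
    string[] GetPostTitles();
}

// InstanceContextMode.Single keeps exactly one service object - and thus
// exactly one domain model - alive in the host process.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class BlogService : IBlogService
{
    private readonly BlogContext context = new BlogContext();

    public string[] GetPostTitles()
    {
        // Clients never touch the model directly; every read and write
        // goes through this facade and therefore through the model.
        return context.Posts.Select(p => p.Title).ToArray();
    }
}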

Just some basic thoughts - probably you think the same: We have also used different ORMs, but we came to one conclusion: If you _really_ take object orientation seriously in a business app, nobody except your domain model is allowed to touch / modify the data that is persisted inside the "store". In other words: If somebody wants to modify data, it has to go through the domain model (which encapsulates all the business logic and therefore all the constraints that make up your system). So: Why do we do all the mapping from a relational model to an object-oriented model? In many cases this is just unnecessary, and that's the reason why we wanted a technology that lets us focus on the model and nothing else (I agree that there are some BI tools which work well with relational databases, but IMHO that is an offline process anyway).

-- Am I right in thinking that when committing a transaction and saving data, the entire object graph is serialized back to the data store? In other words, there is no way to write just a small change, even if I modified only one integer field, for example?

Fortunately, you are wrong ;) The framework keeps track of the instances that have to be updated, inserted or deleted, and executes only that "Unit of Work" when committing a transaction (committing a transaction also triggers a persist). Of course, the hierarchy is taken into consideration, too. Depending on how you modeled things (using Aggregation and Composition), removing an entity from a composition which itself composes a lot of children removes the entity and also its children from the store (unless another entity in the store still points to one of them via an aggregation - then you could not remove that entity). Maybe this sounds a little complicated, but when you use the framework, you don't have to think about these details very often.
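To make that concrete with the blog sample - the attribute names below are only assumptions for illustration, not the real ObjectLounge markers:

using System;
using System.Collections.Generic;

// Dummy attributes standing in for however the framework marks
// relationship kinds (assumed names, not the actual API).
[AttributeUsage(AttributeTargets.Property)]
public class CompositionAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Property)]
public class AggregationAttribute : Attribute { }

public class Comment { public string Text { get; set; } }
public class PostCategory { public string Name { get; set; } }

public class Post
{
    private readonly IList<Comment> comments = new List<Comment>();
    private readonly IList<PostCategory> categories = new List<PostCategory>();

    // Composition: the post owns its comments. Deleting the post from
    // the store deletes the comments in the same unit of work.
    [Composition]
    public IList<Comment> Comments { get { return comments; } }

    // Aggregation: categories are shared across posts, so a category is
    // only removable once no other entity still references it.
    [Aggregation]
    public IList<PostCategory> Categories { get { return categories; } }
}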

You can also take a look at our integration tests; maybe they give you a little insight into how the framework works. Otherwise, just ask :)

One thought I had was that, while my domain object graph does contain cycles, I could easily pick a "master object" for a "layer." (I'm just making up terms here.) Could I then write changes, one layer per database row, as smaller chunks? But there would have to be a mechanism to handle duplicate objects, like your weak references to PostCategory in the sample, now spanning multiple rows.

I don't think I completely understand what you mean. Could you explain it in a little more detail?

Cheers!

Sep 28, 2009 at 11:22 AM

I understand more now that I've poked around the code a bit. Thanks for steering me in the right direction. I don't see the need for the change I mentioned; your unit of work already takes care of the problem. I'm going to give ObjectLounge a try for my application, a small desktop (WinForms) application that manages information for some clubs (organizations). I'm going to use SQLite as the provider. It looks like that provider will be almost exactly the same as the CE provider you already have.

I got to wondering about using ObjectLounge with a web app. Just thoughts at this point. Do you know if there are any problems running this in medium trust? I'm on a shared hosting site. 

This also got me thinking about multiple users and handling updates. Any thoughts on handling an object changed by another user when writing back to the DB? I can envision an optional datetime or timestamp field in the underlying data store: {ID, Data, LastChangeDate}. But I'm sure there are lots of ways to handle this, and you've probably already thought of a few possibilities that integrate well with your model.
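For example, something roughly like this - just a sketch of the idea, with made-up table and column names, and I haven't tried it against your provider model:

using System;
using System.Data.SQLite;

public static class OptimisticWriter
{
    // The UPDATE only succeeds if the row still carries the timestamp we
    // read earlier; zero affected rows means another user changed the
    // row in the meantime, and the caller has to reload and retry.
    public static bool TryUpdate(SQLiteConnection connection, int id,
                                 byte[] newData, DateTime lastSeenChange)
    {
        using (SQLiteCommand command = connection.CreateCommand())
        {
            command.CommandText =
                @"UPDATE Entities
                     SET Data = @data, LastChangeDate = @now
                   WHERE ID = @id AND LastChangeDate = @lastSeen";
            command.Parameters.AddWithValue("@data", newData);
            command.Parameters.AddWithValue("@now", DateTime.UtcNow);
            command.Parameters.AddWithValue("@id", id);
            command.Parameters.AddWithValue("@lastSeen", lastSeenChange);

            return command.ExecuteNonQuery() == 1;
        }
    }
}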

Finally, when I ran the unit tests a number of them failed. I'm guessing that's because of the transition from older versions of the code base up to the 0.1.3 version, but I figured I'd pass it along just in case.


Sep 29, 2009 at 8:40 AM

It's nice that you want to give it a try, and I would be happy to give you support.

I'm going to use SQLite as the provider. It looks like that provider will be almost exactly the same as the CE provider you already have.

Yes, I think so, too - especially because we only needed some kind of "store" and didn't want to use the filesystem. If you have the SyncProvider implemented, maybe you could commit it to the project?

Do you know if there are any problems running this in medium trust?

Unfortunately, we haven't had this scenario yet. If you want, I could provide you with a very small ASP.Net application and you could give it a try?

This also got me thinking about multiple users and handling updates...

I think this is a "general" concurrency issue (if I understand you correctly). There are many ways to handle it, depending on what exactly the scenario is. Basically, the framework already has some concurrency handling mechanisms implemented: "Read Committed" (only committed values can be read from another access) and something like a "Serialized Write" (which means here that if A tries to write a property of an instance which has already been touched by B, A gets an exception). All this refers to "direct" concurrency, meaning two or more transactions running simultaneously.
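A tiny self-contained model of that serialized-write rule - this is not ObjectLounge code, just the concept in isolation:

using System;

public class WriteGuard
{
    // The transaction that currently "owns" the instance, if any.
    private object owner;

    // Called whenever a transaction writes a property of the instance.
    public void Touch(object transaction)
    {
        if (owner != null && !ReferenceEquals(owner, transaction))
            throw new InvalidOperationException(
                "Instance was already modified by another open transaction.");
        owner = transaction;
    }

    // Called when the owning transaction commits or rolls back.
    public void Release()
    {
        owner = null;
    }
}

But maybe this is not what you mean, so: There are also other scenarios where you have multiple clients, and if one changes the state, all the other clients hold an outdated state from then on. There are also many ways to solve this; in general, this rule might help: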

You have to build your model using validations, and all constraints must be expressed in them!

Validations can be used for "state-based" rules (like "this field must be greater than zero") or for "transitional" rules (like "only transitions from C to D are allowed, not from C to A"). In our ValidationAttribute, you can therefore throw exceptions immediately (mostly used for transitional rules) or wait until the whole transaction is finished (for state-based rules).

An example: You can set the state of an invoice from "new" to "paid" to "closed". If a client (A) thinks an invoice is "new" and wants to set it to "paid", but another client has already set it to "paid" and then to "closed", there must be a validation rule that checks this when A wants to transition to "paid". The rule will then throw an exception to the client, and all the changes made within this transaction will be rolled back. I hope this helped; otherwise, let me know if you need some more support. For most scenarios, I think using a "last updated" field or something similar should not be necessary.
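Here is a rough, self-contained sketch of such a transition rule - the wiring is illustrative only, not the actual ValidationAttribute plumbing:

using System;

public class Invoice
{
    private string state = "new";

    public string State
    {
        get { return state; }
        set
        {
            ValidateTransition(state, value);
            state = value;
        }
    }

    // Transitional rule: only "new" -> "paid" -> "closed" is allowed.
    // An out-of-date client attempting any other move fails immediately,
    // and its transaction can then be rolled back.
    private static void ValidateTransition(string from, string to)
    {
        bool allowed = (from == "new" && to == "paid")
                    || (from == "paid" && to == "closed");
        if (!allowed)
            throw new InvalidOperationException(
                "Invalid invoice transition: " + from + " -> " + to);
    }
}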

Finally, when I ran the unit tests a number of them failed. I'm guessing that's because of the transition from older versions of the code base up to the 0.1.3 version, but I figured I'd pass it along just in case.

You guessed right :) Sorry about that. We have also already written some tests for future features. Maybe for the next release, I'll try to exclude these tests before releasing.