Validation imposes restrictions on the model, to ensure (for example) that the model will save data that matches the corresponding database table.
A model can validate data before passing it on to a data store such as a database to ensure that it conforms to the backend schema.
The network is, after all, one of the slowest components of the system, and it is shared by everyone, so the less traffic there is, the better the response times for all.
One implication of having the validation in both the database and the user interface (and possibly the middleware too) is that the same logic is expressed in more than one place.
One way to validate data is to create a model schema; LoopBack will then ensure that data conforms to that schema definition. The following code defines a schema and assigns it to the product model.
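The general shape of schema-driven validation can be sketched in plain Node.js; note that the schema format and the `validate` helper below are illustrative stand-ins, not LoopBack's actual API:

```javascript
// Hypothetical sketch of schema-driven validation (not LoopBack's real API).
// A schema maps property names to type and constraint rules.
const productSchema = {
  name:  { type: 'string', required: true },
  price: { type: 'number', required: true, min: 0 }
};

// Check a candidate object against a schema; return a list of error messages.
function validate(schema, data) {
  const errors = [];
  for (const [prop, rules] of Object.entries(schema)) {
    const value = data[prop];
    if (value === undefined || value === null) {
      if (rules.required) errors.push(`${prop} is required`);
      continue;
    }
    if (typeof value !== rules.type) {
      errors.push(`${prop} must be a ${rules.type}`);
      continue;
    }
    if (rules.min !== undefined && value < rules.min) {
      errors.push(`${prop} must be >= ${rules.min}`);
    }
  }
  return errors;
}

console.log(validate(productSchema, { name: 'Widget', price: 9.99 })); // []
console.log(validate(productSchema, { price: -1 }));
```

A framework like LoopBack performs this kind of check automatically whenever a model instance is saved, so the rules live in one declared schema rather than being repeated at every call site.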
With multiple layers of validation we also face the challenge that the code will be written in multiple programming languages using different programming styles: declarative in the database, object-oriented in the middleware, and either procedural or functional in the user interface (depending on the style in which you choose to write your JavaScript).
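For instance, a rule as simple as "price must be non-negative" ends up expressed twice in two different styles; the table and function names here are illustrative:

```javascript
// The same rule expressed in two layers (illustrative names).
// Declarative, in the database schema:
//   ALTER TABLE product ADD CONSTRAINT price_non_negative CHECK (price >= 0);

// Imperative, in client-side JavaScript:
function isPriceValid(price) {
  return typeof price === 'number' && price >= 0;
}

console.log(isPriceValid(9.99)); // true
console.log(isPriceValid(-1));   // false
```

When either copy of the rule changes, someone has to remember to change the other, which is exactly the duplication problem described above.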
Of course, neither this method nor N-Version programming can detect when all the versions are in error. The decision algorithm itself could also have bugs, so although in some ways I find this approach to safety-critical systems reassuring, it is far from foolproof (of course, I am not going to think about this the next time I step onto an airliner).
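As a toy illustration of the decision algorithm behind N-Version programming, here is a simple majority voter over independently computed results; as noted above, the voter itself could be buggy, and no voter can help when every version agrees on the same wrong answer:

```javascript
// Toy majority voter for N-Version programming (illustrative only).
// Each "version" is an independently written implementation of the same function.
function majorityVote(versions, input) {
  const counts = new Map();
  for (const version of versions) {
    const key = JSON.stringify(version(input));
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  // Pick the result produced by the largest number of versions.
  let best = null, bestCount = 0;
  for (const [key, count] of counts) {
    if (count > bestCount) { best = key; bestCount = count; }
  }
  // If no strict majority, signal disagreement instead of guessing.
  if (bestCount <= versions.length / 2) throw new Error('no majority');
  return JSON.parse(best);
}

// Three independently written absolute-value functions, one of them buggy:
const versions = [
  x => (x < 0 ? -x : x),
  x => Math.abs(x),
  x => x // buggy version: forgets to negate
];
console.log(majorityVote(versions, -3)); // 3 (the buggy version is outvoted)
```

The voter masks a single faulty version, but if two of the three versions shared the same bug, the vote would confidently return the wrong answer.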
Database-level validation is necessary, but not sufficient, for building effective and usable systems.
Of course, when you are updating data external to your organization (say, placing an order over a web service), errors raised by the web service clearly must be handled in the application.
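The application-side handling might look like the sketch below; `placeOrder` is a hypothetical stand-in for a real web-service client, stubbed here to simulate the remote service rejecting a bad request:

```javascript
// Hypothetical web-service client (placeOrder is an illustrative stand-in).
async function placeOrder(order) {
  // A real client would issue an HTTP request here; this stub simulates
  // the remote service rejecting an invalid order.
  if (!order.productId) {
    throw new Error('service rejected order: productId is required');
  }
  return { status: 'accepted', orderId: 42 };
}

// Application-level handling of errors raised by the remote service:
async function submitOrder(order) {
  try {
    return await placeOrder(order);
  } catch (err) {
    // The application decides how to react: retry, queue, or surface to the user.
    return { status: 'failed', reason: err.message };
  }
}

submitOrder({}).then(r => console.log(r.status)); // "failed"
```

The point is that the database's own constraints never see this interaction at all: when the authoritative check lives on the far side of a service boundary, the application is the only place the error can be caught and acted upon.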