VAPI Core
Seriously look at logging. We currently have good logging through the handler (so server communications), but are not saving it anywhere. The logging needs to be made available and should also allow for logging to more than one location. As it sits, logging is configured a bit like the upstreams in the config file. If we were to take this section and allow for multiple objects in config.logging, it may open this up to serving multiple locations. All of the logging should route to the same place (so the upstream will be the same for all logging requests), which means we can set one upstream config. A hypothetical shape for such a config is sketched below.
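A minimal sketch, assuming this note were implemented; the property names (requests, errors, upstream, level) are illustrative assumptions, not the current config format:

// Hypothetical sketch only -- property names are assumptions, not the
// current config format. Every target shares the same upstream, per the note above.
logging: {
    requests: { upstream: 'logserver', level: 'info' },
    errors:   { upstream: 'logserver', level: 'error' }
}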
If / when we use this package for other servers we have the following issue. The first core server uses the second spot in the URI (position 3 in the URL array -> vhpportal.com/api/ROUTE) and calls it the route. If the route equals an upstream, the URL is sent to that core server, which will use the same position in the URI. That does not leave room for the second core server to have more than one route. Options:
The VAPI-CORE is meant to create a server that supports custom controllers and/or acts as a reverse proxy to other services. The package sits behind all VHP servers in the system and standardizes the underlying code of these servers. It has built in:
Once running, the server accepts messages and immediately starts the middle-ware. In this process, the middle-ware checks whether the request is to an up-stream, in which case it is handled by an upstream handler; otherwise it starts validating the request and preparing the handler config data. If at any point middle-ware fails, the chain is stopped, the error is logged, the handler is not created, and the requester is notified.
If middle-ware is successful, an internal handler is created from the prepared information and passed to the Router. It is here the handler’s request information is used to match it to a Controller. It should be noted that up-streams and controllers are referred to using the same value in the URL. Since up-streams are tested in middle-ware, up-streams will be picked first. Ensure there are no naming conflicts between your controllers and up-streams.
The server can be modified through a config file. This file is passed to the CoreServer upon creation. The file contains the following:
This part is not required, but recommended. Create a:
A controller takes the form of a class. In the index file you must initialize each controller and then export it. Below is one way to do it; this is the only good opportunity to do things with the controllers before they are attached to the server, so take your time. We use the … spread operator to flatten the routes of each controller into the export. It is important to mind the naming of controllers. A rough sketch of a controller class itself follows the index example below.
Example of /controllers/index.js:
let lib = require('./tools/index.js');
let ProjectsController = require('./ProjectsController.js');
let SupportController = require('./SupportController.js');
let TrackingController = require('./TrackingController.js');
module.exports = {
    ...new ProjectsController(lib),
    ...new SupportController(lib),
    ...new TrackingController(lib)
}
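For reference, a minimal sketch of what one of these controller classes might look like. The class name, route name, and use of lib here are illustrative assumptions; the exact shape of each route entry is covered in the Controllers section.

// Hypothetical sketch of /controllers/ProjectsController.js -- names and
// shape are illustrative; see the Controllers section for the full structure.
class ProjectsController {
    constructor(lib) {
        // each property spread into the export becomes a route on the server
        this.GetProject = {
            route: (handler) => {
                // use lib and the handler's prepared data to build a response
            },
            models: ['project'],     // must match a model registered in controls
            scheme: 'GetProject'     // must match a registered request_scheme
        };
    }
}

module.exports = ProjectsController;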
The core can be started by requiring vapi-core-server and initializing the class. The constructor accepts ‘config’, ‘type’, ‘controls’, and ‘midwares’ as parameters.
let CoreServer = require('vapi-core-server')

let core = new CoreServer({
    config: require('./config/config.json'),
    type: String,        // placeholder: the handler type name, e.g. 'http'
    controls: require('./'),
    midwares: Object,    // placeholder: optional custom middle-ware object
})
After this the server is running. To confirm, check your command line; you should see the following message:
Core Server listening: 5000
Once started and listening, the server can receive requests.
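As a quick sanity check, a request can be sent from another Node process. The port, path prefix, and method below are assumptions (localhost:5000 and the /api/ROUTE layout described in the notes above, against the default PING controller); adjust them to your actual config.

// Hedged sketch: hit the default PING controller on a locally running core.
// Port, path, and method are assumptions -- match them to your config.
const http = require('http');

const req = http.request('http://localhost:5000/api/PING', { method: 'POST' }, (res) => {
    let body = '';
    res.on('data', (chunk) => body += chunk);
    res.on('end', () => console.log('PING response:', body));
});
req.end();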
The core server is the base of the package and responsible for:
/**
 * @param {Object} config
 * @param {String} type
 * @param {Object} controls
 * @param {Object} midwares
 */
constructor({
    type,
    config,
    controls={},
    midwares=null
})
Example controls object:
{
    controllers: {
        GetProfile: {
            route: () => {},
            models: ['userProfile'],
            scheme: 'GetProfile'
        }
    },
    models: {
        userProfile: (info, server) => {}
    },
    request_schemes: {
        GetProfile: {
            name: 'person',
            strict: true,
            scheme: {
                name: { type: 'String', desc: 'the persons name' }
            }
        }
    }
}
What each part of the controls does is covered further down in the Controllers section. If controls is not passed, the server will start with its default controllers (PING). During setup the models and request_schemes are added first. These are then available when the controllers are set up. Controllers are trimmed of any models and schemes that do not match anything registered on the server. The setup will not block the controller from being registered, but it may not act as it should.
{
    middleware1: () => {},
    middleware2: () => {}
}
Middle-ware in the core works by taking a set of functions, each accepting the handler as its parameter, and running them before the handler is passed to the router. RUNmidware acts recursively, and is responsible for running the chain on each request and managing the handler data. All functions have access to the handler’s properties, but are not allowed to add custom properties. Each time back through the RUNmidware function the handler is checked and cleaned if necessary.
@TODO - we are not doing any checks on the handler except for errors.
{
req: Object -> request object
res: Object -> response object
data: Object -> request body
errors: Object -> any errors picked up along the way
}
If at any point in the chain there is an error, RUNmidware will catch it and respond to the requester with the error message(s) provided by that middle-ware function. At the end of the chain this handler data is registered with the server using the handler class declared by the “type” property in config.
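For intuition only, a rough sketch of how such a recursive chain runner could behave; this is not the package’s actual RUNmidware implementation, just an illustration of the flow described above.

// Illustrative sketch only -- not the actual RUNmidware implementation.
// Runs each middle-ware in order, checking the handler between steps.
function runChainSketch(fns, handler, index = 0) {
    if (index >= fns.length) return Promise.resolve(handler);   // end of chain
    return fns[index](handler).then((next) => {
        // if a middle-ware attached errors, stop the chain and reject
        if (next.errors && Object.keys(next.errors).length) {
            return Promise.reject(next.errors);
        }
        return runChainSketch(fns, next, index + 1);             // recurse
    });
}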
The core comes with the following middle-ware installed:
Middle-ware is added first at construction of the CoreServer. The order in which the functions are listed is the order they are processed, ‘middleware1’ before ‘middleware2’. Duplicate names are not allowed; during setup the second occurrence of a name is not added to the list. The whole list is added after Models (or Dev if the server is in dev mode). It is not possible to change the location of the list, but it is possible to override any of the defaults.
To override a default you must have a matching name in your list. Setup will prefer your middle-ware over the default, and since there are no duplicates the new order is whatever is left in the list. A hedged example of such an override is shown below.
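A minimal sketch, building on the construction example above and assuming a default middle-ware named ‘Validation’ exists (the name is an assumption used only for illustration):

// 'Validation' is an assumed default name used only for illustration;
// a matching key replaces the default, non-matching keys are appended.
let core = new CoreServer({
    config: require('./config/config.json'),
    type: String,                       // placeholder for the handler type
    controls: require('./'),
    midwares: {
        Validation: (handler) => Promise.resolve(handler),   // overrides the default
        customLogger: (handler) => {                          // appended after defaults
            console.log(handler.req.url);
            return Promise.resolve(handler);
        }
    }
})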
Function Rules
{
    example: (handler) => {
        return new Promise((resolve, reject) => {
            //do stuff
            return resolve(handler);
        })
    }
}
Handlers are built into the server and are unmodifiable. They act as a wrapper for the request and a constant for that request throughout the connection. A handler has the abilities to:
constructor({
    req,
    res,
    info,
    pack,
    connection
})
If the above exist, they are attached to the handler and made available. In addition, the following are created and made available.
After setting up the above configuration, the handler should fall into one of the following ‘modes’: 1) handle a request to an upstream, 2) handle a request to internal routes.
In the first, the server will create a special handler in middle-ware if the route passed matches any upstream. This allows requests that do not need this server to skip further setup. During handler setup, pass a connection to notify the handler it will be routing this request upstream; it will reduce the amount of overhead in the handler. The only functions used in this ‘mode’ are:
A handler that is not routing upstream will be set up at the end of successful middle-ware, and provides communication to outside servers (via the type’s base module, i.e. the http node module) as well as easier access to the req and res objects. In our case, a route handler will rarely need access to other servers, as that is what the models are for, and you would just set up an upstream. In cases where that functionality is needed, the following can be used:
All types are built on the base class of CoreHandler(). As server needs grow we can add more types. For ease of use, the available function names are the same across handlers. This means in an https or http handler you would call request() to reach a server; only the arguments required to make them work differ.
This handler is built using the http module native to node. Much of the inner workings are better explained there.
({
    url = null,
    body = '',
    addoptions = null,
    addheaders = {},
    method = 'POST'
})
(req, res, options)=>{}
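A hedged usage sketch of request() with the options shown above; whether it resolves a promise or takes a callback is not documented here, so the promise style below is an assumption, and the URL and body values are illustrative.

// Usage sketch only -- assumes request() returns a promise; URL, body,
// and headers are illustrative values.
handler.request({
    url: 'http://localhost:5001/api/route',
    body: JSON.stringify({ id: '123' }),
    addheaders: { 'Content-Type': 'application/json' },
    method: 'POST'
})
.then((response) => { /* use the upstream response */ })
.catch((err) => { /* log and respond with the failure */ });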
Models (in the context of vapi-core-server) are objects requested by a controller in order to complete a route. At its most basic, a model would query the database (as an upstream) to get an item. There is no requirement to query an upstream; maybe the model returns a static structure to be used. The point being, the model is free for the user to “pre-load” needed data.
Model Structure:
model = (handler, server) => {
    return new Promise((resolve, reject) => {
        return resolve({ success: <BOOLEAN>, model: <OBJECT> })
    })
}
It is important for the model to return success, as that is how the server knows to include the model with the request. If success is false, the server will not include the model, and the model will not be there for the route.
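As a minimal sketch of the structure above, a model that “pre-loads” a static structure rather than querying an upstream; the model name and returned data are illustrative.

// Illustrative model: no upstream query, just static "pre-loaded" data.
let defaults = (handler, server) => {
    return new Promise((resolve, reject) => {
        return resolve({
            success: true,                          // tells the server to attach the model
            model: { theme: 'light', pageSize: 25 } // static structure for the route
        });
    });
};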
Models can be accessed by name in handler.models[name].
Errors are organized as “fail one, fail all”. This type of failure is not based on the success of the model response; it covers a code failure or some other ‘unintended’ failure. These errors are handled outside of the developer’s control, and will print to the screen with the title ‘PACK MODELS ERROR >’.
Models will get passed to the server in the controls object like this:
controls: {
    controllers: {},
    models: {
        model1: () => {},
        model2: () => {}
    },
    request_schemes: {}
}
Needed models can then be referenced in the controllers:
controller: {
    route: () => {},
    models: [ 'model1', 'model2' ],
    scheme: ''
}
For example, we will use the route ‘GETanalytics’. It will return metrics on consultant quotes. The actual route will loop through a list of quotes and perform these metrics. We can use models here to gather the data needed to run the metrics. We need to gather:
‘settings350’, for that department
‘quotesBYconsultant’, based on the consultant
Each model will receive the handler setup info being built by middle-ware. With this the model can use whatever it needs to attempt to load the needed information.
In our example, settings350 does not need any input. It will be set up to query for the 350 settings. If one wanted to broaden the use of the model, they could have the model look for a ‘dept’ property in the handler.pack, and the model could be renamed ‘settings’.
For quotesBYconsultant, we will need some input. Most obviously we could look into the pack for an ‘estimator’ property. Alternatively, if the user name is used, we could access handler.auth.user for the user name. This is not to say you should do that, only to point out the possibility.
Most of the time the model will be looking into the handler.pack object for the required query info. This is fine so long as you are aware of the naming and of the different models / routes using the same pack. So, for example, the request pack for our fake route above is:
{
estimators: Array
}
If we were going to use our “clever” model ‘settings’, we would also need to include a ‘dept’ property in our pack. So long as everything ‘downstream’ of it does not use a different type of ‘dept’ property everything would be fine; the downstream could just ‘reuse’ dept.
Something more annoying (and more likely) would be different uses of id. Take the above models and assume they both require a pack.id to query. Again, this is fine so long as the id is the same across the two models, like our empid being used in both the devices and accounts collections. If they required two different ids then the models would have to be adjusted; maybe a pack.quote.id property is used. Otherwise the two models cannot be used together properly.
Another way to approach this modeling would be to combine the two models into one, and match the naming to the controller’s name. Now upon request the controller will ask for one model that returns the settings and consultant(s) quotes. As a best practice, this is the recommended way to use models. If you desire to reuse models, do it. It is not unacceptable for a model to be set up to use another model.
Take the above, again. We would set up the following models:
In a place with access to the above functions (imported or otherwise) you would have your more specific ‘GETanalytics’ model, which calls the above models, waits on both responses, and returns all models. A hedged sketch follows.
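A minimal sketch of that combined model, assuming the two smaller models are importable from the paths shown (the paths are illustrative):

// Illustrative paths -- adjust to wherever the two models actually live.
let settings350 = require('./models/settings350.js');
let quotesBYconsultant = require('./models/quotesBYconsultant.js');

let GETanalytics = (handler, server) => {
    return new Promise((resolve, reject) => {
        Promise.all([
            settings350(handler, server),
            quotesBYconsultant(handler, server)
        ])
        .then(([settings, quotes]) => {
            // only report success if both underlying models succeeded
            return resolve({
                success: settings.success && quotes.success,
                model: { settings: settings.model, quotes: quotes.model }
            });
        })
        .catch((err) => reject(err));
    });
};

module.exports = GETanalytics;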
Schemes are JSON objects to describe requirements for incoming request data. The following is the structure for a request_scheme:
{
    name: String,
    strict: Boolean,
    scheme: {
        id: { type: String, default: 'value', desc: 'property description' },
        customer: { type: Object, desc: 'Customer properties', scheme: subscheme }
    }
}
Scheme is the actual object that runs through the validation. Anything on the outside describes the scheme and how it behaves as a whole; within the scheme you can find individual descriptors per property.

Type is simply the expected variable type of the property; the available types are anything allowed by JSON. Default holds a value to default to if the property is not present in the request. Not setting a default tells the validation function that the property is not required at all. To mark the property as required and necessitating an actual requested value, the default can be set to ‘NEED’. Desc is a verbose description of the property. Properties have to contain a type, but do not need a default or description.

Strict is used to allow outside properties in the scheme. A FALSE value allows outside properties, but does not relax the enforcement on properties within the scheme object. In a situation where the whole request needs to be “un-strict”, pass {} to scheme and set strict to false.

Currently the approach is to make a full data structure for a set of schemes, i.e. a Project Scheme that mocks the full data structure of a Project object. This can then be referenced by the actual schemes, which will often only require a very small number of properties. For instance, the GETproject scheme would only pull in the ID property of the full Project Scheme; a sketch of this is shown below.
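A minimal sketch of that approach; the property names inside ProjectScheme are illustrative assumptions, not the actual Project structure.

// Illustrative only -- ProjectScheme mocks a full Project object, and the
// GETproject request_scheme pulls in just the ID property it needs.
let ProjectScheme = {
    id: { type: 'String', desc: 'unique project id' },
    name: { type: 'String', desc: 'project display name' },
    tasks: { type: 'Array', desc: 'tasks attached to the project' }
};

let GETproject = {
    name: 'GETproject',
    strict: true,
    scheme: {
        id: { ...ProjectScheme.id, default: 'NEED' }   // required, must be supplied
    }
};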
Nested schemes can be created by setting a property to type = Object and adding a scheme property that holds the sub-scheme object. Sub-schemes are constructed exactly like regular schemes. During validation the checker will check all levels of both the request object and the scheme object. To ensure the validation properly checks the sub-scheme, it and its parent property need to be fully set on the scheme.
{
    name: String,
    strict: Boolean,
    scheme: {
        estimator: { ...fullscheme.estimator, default: 'NEED' },
        customer: {
            ...fullscheme.customer,
            scheme: {
                name: { ...subscheme.name, default: 'NEED' }
            }
        }
    }
}
Controllers are initialized through the controls argument and can be added afterwards if needed. They are the way to actually make the server do what you want. A full controller will have the following structure:
{
    name: String,
    route: Function,
    models: Array,
    scheme: String
}
Models and scheme are optional, but if used they need to reference a model or request_scheme registered with the server (passed through controls).
It is not important for there to be a prep pack attached to a route. Take the default PING route: there is no scheme to pass or model to gather. If a custom route does not need prep, simply exclude the models and scheme values, OR set them to null. In either case they can be successfully excluded. Of course, it is recommended that every route have a matching scheme to ensure the route does not hit avoidable failure. A hedged example of both styles is shown below.
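A minimal sketch of the two equivalent ways to register a route with no prep; the route names and bodies are illustrative placeholders.

// Illustrative only -- route names and bodies are placeholders.
controllers: {
    PING: {
        route: (handler) => { /* reply with a simple pong */ }
        // models and scheme simply excluded
    },
    STATUS: {
        route: (handler) => { /* report server status */ },
        models: null,            // explicitly excluded
        scheme: null
    }
}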
Second is the route which has the following guidelines:
options.query must be a single object; it will not accept an array
options.doc may be either a single object or an array
Remove: {
    good: RESULT > { acknowledged: true, deletedCount: 1 }
    bad (doc does not exist): RESULT > { acknowledged: true, deletedCount: 0 }
    (array): ::NO RESPONSE::
}

Update: {
    good: RESULT > { acknowledged: true, modifiedCount: 1, upsertedId: null, upsertedCount: 0, matchedCount: 1 }
    "failed" (doc already updated): RESULT > { acknowledged: true, modifiedCount: 0, upsertedId: null, upsertedCount: 0, matchedCount: 1 }
    "insert" (doc to update does not exist): RESULT > { acknowledged: true, modifiedCount: 0, upsertedId: new ObjectId("645109b33df32fe6f0826fbf"), upsertedCount: 1, matchedCount: 0 }
}

Find: {
    good: RESULT > [ { _id: new ObjectId("64498ece88c3044762686fd0"), empID: '07', fName: 'First', lName: 'Last', tasks: [], goals: [], __v: 0 } ]
    bad (doc does not exist): RESULT > []
}

Insert: {
    good (single doc): RESULT > [ { empID: '07', fName: 'test', lName: 'guy', tasks: [], goals: [], _id: new ObjectId("64498dd492bbc6fc70b8cec2"), __v: 0 } ]
    good (array of docs): RESULT > [ { empID: '08', fName: 'test', lName: 'guy', tasks: [], goals: [], _id: new ObjectId("644a759d6d689ac841293942"), __v: 0 }, { empID: '09', fName: 'test', lName: 'guy', tasks: [], goals: [], _id: new ObjectId("644a759d6d689ac841293943"), __v: 0 }, { empID: '10', fName: 'test', lName: 'guy', tasks: [], goals: [], _id: new ObjectId("644a759d6d689ac841293944"), __v: 0 } ]
    bad (doc already exists): ::NO RESPONSE:: MongoBulkWriteError
    bad (one doc exists in array): ::NO RESPONSE::
}