This article in Russian: Краткое руководство по эксплуатации
In the guide below, you can find useful information on the applications included in Hydra Billing. To learn more about application interaction, please see Application Interaction (API).
Web applications that interact with Hydra are based on the Ruby on Rails framework (the Ruby programming language). They run under the rails OS user account and are served by the Apache web server with the Passenger module (or by nginx with Unicorn). New versions of web applications are set up on the server by HBS engineers at their own discretion.
Each application is set up inside a separate directory (for example, the Service Provider Console — inside /opt/hydra/rails/arm_isp, the Customer Self-Care Portal — inside /opt/hydra/rails/arm_private_office, and so on), which has the following contents:
releases — a directory that contains releases (versions) of the application. It stores all versions of the application that have been set up on the server, which makes it possible to roll back to a previous version after an unsuccessful update.
current — a symbolic link to the directory containing the version of the application that is currently in use. Use this link when specifying the path to the application in the virtual host settings of the web server.
shared — a directory that contains general application settings used regardless of releases. It also stores, for example, temporary files and logs generated during the application run.
Rotating and archiving logs
During its run, the application logs data into the shared/log/production.log file, so that over a long production period it may grow up to several GB. To ensure having available disk space at all times, you should regularly rotate and archive this log, for example, with the help of logrotate.
Deleting obsolete releases
Old versions remain in the releases directory. To ensure having available disk space at all times, you should regularly delete obsolete releases, for example, one month after their last use.
Cache flushing
To speed up request processing, web applications build up an internal cache in RAM while they run. In some cases, for example, when the link to the current release is changed, you should flush the cache to ensure stable operation. You can flush the cache by executing the appropriate command from the application's directory.
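The exact command may differ between Hydra versions; the following is a minimal sketch, assuming a standard Apache + Passenger setup where restarting the application workers rebuilds the in-memory cache (the arm_isp path is used only as an example):

    # Restart the application via Passenger's restart.txt mechanism so that
    # its in-memory cache is rebuilt; repeat for each affected application.
    cd /opt/hydra/rails/arm_isp/current
    touch tmp/restart.txt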
Launch, stop and restart
In most cases, you manage web applications with the help of the /etc/init.d/apache Apache init script. For example, the Apache restart command can be as follows:
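A sketch of the restart (the init script name may be apache2 or httpd instead, depending on the distribution):

    # Restart the Apache web server via its init script
    /etc/init.d/apache restart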
You may also need to restart an individual application, for example, when it becomes unavailable to all users.
All agents use the Python programming language and have similar ways of setting up and managing their execution. In production mode, all of them run as daemons and are managed with the help of individual init scripts (for example, /etc/init.d/hcd — for hcd). Each application has a separate config file with certain launch and logging parameters. See an example for hard (the /etc/hard/hard.conf file):
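The actual contents of the file depend on the Hydra version; the fragment below is a hypothetical sketch that only illustrates the kind of launch and logging parameters discussed here (the log path and the two rotation options are taken from this guide, the remaining names and values are assumptions):

    # /etc/hard/hard.conf — hypothetical fragment, real parameter names may differ
    # Logging
    log file = /var/log/hard/hard.log
    log level = info
    # Rotation settings (see the next subsection); values are examples only
    log rotate size = 104857600
    log rotate count = 10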
Rotating and archiving logs
During its run, the application logs data to an individual file, and the path to it is specified in the config (for example, /var/log/hard/hard.log — for hard). All agents have built-in functionality for rotating the log file. When setting up the parameters responsible for rotation (log rotate size and log rotate count), pay attention to the rate at which the log file grows in production mode and adjust the settings in the config accordingly. The log file should contain data collected over at least the last week of the application run.
Rotating the MongoDB log
Agent hard uses the MongoDB object database which, during its run, logs data to the /var/log/mongodb/mongodb.log file. Over a long production period, the file may grow up to several GB. To ensure having enough available disk space at all times, you should regularly rotate and archive this log, for example, with the help of logrotate.
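A minimal logrotate sketch, assuming the standard /etc/logrotate.d/ layout; the schedule and the number of kept archives are example values, and copytruncate is used so that mongod does not have to be signalled to reopen its log:

    # /etc/logrotate.d/mongodb — example configuration (values are illustrative).
    # Rotate weekly, keep eight compressed archives, truncate the log in place.
    /var/log/mongodb/mongodb.log {
        weekly
        rotate 8
        compress
        missingok
        notifempty
        copytruncate
    }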
Flushing cache for hard
In some cases, agent hard (the autonomous RADIUS daemon) requires flushing the cache of the internal MongoDB object database, which is used for data caching. You can flush the cache with the help of the following commands:
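A hypothetical sketch of such a flush: stop the agent, drop its cache database in the mongo shell, and start the agent again. The init script path follows the convention used for the other agents, and the database name hard is only a placeholder — check the actual name in /etc/hard/hard.conf first:

    # Stop the agent, drop its MongoDB cache database, start the agent again.
    # "hard" as the database name is an assumption — verify it before running.
    /etc/init.d/hard stop
    mongo hard --eval "db.dropDatabase()"
    /etc/init.d/hard start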
Launch, stop and restart
You can manage agent applications with the help of individual init scripts (for example, /etc/init.d/hpd — for hpd). The restart command is as follows:
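For example (the same pattern applies to the other agents' init scripts):

    # Restart the hpd agent via its init script
    /etc/init.d/hpd restart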
The SNMP data collector, similar to the agents, has its own config file, /etc/hsnmp/hsnmp.conf, where its operation and logging parameters are set up.
During its run, the collector logs data to an individual file, and the path to it is specified in the config. Similar to the agents, hsnmp has built-in functionality for rotating the log file. When setting up the rotation parameters (log rotate size and log rotate count), pay attention to the rate at which the log file grows in production mode and adjust the settings in the config accordingly. The log file should contain data collected over at least the last week of the application run.
Files containing the database data are typically located within the /var/oradata directory on the server. This directory stores tablespace files (the main ones are HYDRA and HYDRA_INDEX), redo logs, and auxiliary data files. During its run, the DBMS creates database log files and trace files inside the /opt/oracle/admin and /opt/oracle/diag/rdbms directories.
Rotating and archiving database log files and trace files
Over the production period, the DB creates a large number of log and trace files inside the /opt/oracle/admin and /opt/oracle/diag/rdbms directories, so they may take up a large part of the disk space. To avoid this, you should regularly delete obsolete files, for example, one month after their creation (see the sketch after this paragraph). If you have a standby server, please also make sure to rotate the /opt/hydra/oracle/logs/update.log file.
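A sketch of such a cleanup, assuming the directories above and a one-month threshold; make sure the paths match your installation and run the find expression without -delete first to preview what would be removed:

    # Delete Oracle log and trace files that have not changed for 30 days.
    # Run as the oracle OS user; drop -delete to preview the list of files.
    find /opt/oracle/admin /opt/oracle/diag/rdbms -type f \
        \( -name "*.trc" -o -name "*.trm" -o -name "*.log" \) \
        -mtime +30 -delete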
Rotating and archiving logs for working with external tables
When working with external tables, Oracle logs the results for each table into a separate file. External tables are used, for example, for loading accounting details into the main DB by Agent HARD (starting from version 4.0). We recommend setting up rotation for these logs and periodically deleting obsolete files. For example, rotation can be performed daily, and you should consider deleting files that were changed more than one month ago.
You can get the path to the directory with these files with the help of the following request:
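A sketch of such a query, assuming the logs are written to the Oracle directory object HYDRA_TMP_DIR mentioned below (requires access to the ALL_DIRECTORIES view):

    -- Get the filesystem path behind the HYDRA_TMP_DIR directory object
    SELECT directory_path
      FROM all_directories
     WHERE directory_name = 'HYDRA_TMP_DIR';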
When working with the HYDRA_TMP_DIR directory, please note that logs for working with external tables are stored only in files of the *.log type, so deleting or changing files of other types will lead to software malfunctions.
Launch, stop, restart
You can manage the operation of databases set up on the server with the help of the /etc/init.d/ora.database init script. For example, the restart command is as follows:
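A sketch following the standard init-script convention:

    # Restart the Oracle database via the init script shipped with Hydra
    /etc/init.d/ora.database restart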
When the amount of unaccounted traffic data within the system reaches large numbers (check the EX_TRAFFIC_COLLECT_C table and see whether the number of rows exceeds 2 mln), you should clear the data with the help of the following SQL request:
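A hypothetical sketch, assuming the cleanup is a plain delete from the collector table named above (everything apart from the table name is an assumption; run it under AIS_NET, as noted below, and back up the data first if you still need it):

    -- Hypothetical cleanup of the unaccounted traffic collector table.
    -- Verify the exact statement for your Hydra version before running it.
    DELETE FROM EX_TRAFFIC_COLLECT_C;
    COMMIT;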
Make sure to perform this request under the AIS_NET user.
A large number of records inside this table means there is a large amount of unaccounted traffic. You should sort out this issue (see the Hydra Billing User Guide for the corresponding manual) or increase the per-session threshold of unaccounted traffic at which the data is collected from the collector.
The «ORA-01555: snapshot too old» error in the task for a DB schema analysis
Over the production period, errors may occur while running the task for the DB schema analysis. Typically, such errors are caused by an insufficient size of the DB rollback segment. For example:
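The error in the task log typically looks like this (a generic example of the Oracle message, with a placeholder segment number and name):

    ORA-01555: snapshot too old: rollback segment number 3 with name "_SYSSMU3$" too small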
To resolve this issue, you need to check the current size of the DB rollback segment with the help of the following request:
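A sketch of such a check, assuming the values referred to below come from the UNDO_RETENTION parameter and the tuned retention reported in V$UNDOSTAT; the column aliases match the names used in the text, while the query itself is an assumption:

    -- Compare the configured undo retention with the value Oracle considers optimal
    SELECT p.value AS undo_retention,
           MAX(s.tuned_undoretention) AS undo_retention_optimal,
           p.value - MAX(s.tuned_undoretention) AS delta
      FROM v$parameter p, v$undostat s
     WHERE p.name = 'undo_retention'
     GROUP BY p.value;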
If the undo_retention_optimal value is much greater than the undo_retention value (i.e. the delta parameter is negative), you should execute the following command:
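A sketch of the command, assuming the value is applied through ALTER SYSTEM (the $undo_retention_optimal placeholder is explained right below):

    -- Increase the undo retention (in seconds) to the previously determined value
    ALTER SYSTEM SET undo_retention = $undo_retention_optimal;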
where instead of the $undo_retention_optimal variable you should insert the value you have previously determined for the corresponding segment.
This request is to be executed under the AIS_NET user.