Codebase as of c53e4aed26 as an initial commit for the shrunk repo

Signed-off-by: Kai Kreuzer <kai@openhab.org>
Kai Kreuzer
2010-02-20 19:23:32 +01:00
committed by Kai Kreuzer
commit bbf1a7fd29
302 changed files with 29726 additions and 0 deletions


@@ -0,0 +1,27 @@
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" output="target/classes" path="src/main/java">
<attributes>
<attribute name="optional" value="true"/>
<attribute name="maven.pomderived" value="true"/>
</attributes>
</classpathentry>
<classpathentry kind="src" output="target/test-classes" path="src/test/java">
<attributes>
<attribute name="test" value="true"/>
<attribute name="optional" value="true"/>
<attribute name="maven.pomderived" value="true"/>
</attributes>
</classpathentry>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/JavaSE-11">
<attributes>
<attribute name="maven.pomderived" value="true"/>
</attributes>
</classpathentry>
<classpathentry kind="con" path="org.eclipse.m2e.MAVEN2_CLASSPATH_CONTAINER">
<attributes>
<attribute name="maven.pomderived" value="true"/>
</attributes>
</classpathentry>
<classpathentry kind="output" path="target/classes"/>
</classpath>


@@ -0,0 +1,23 @@
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>org.openhab.persistence.rrd4j</name>
<comment></comment>
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.eclipse.jdt.core.javabuilder</name>
<arguments>
</arguments>
</buildCommand>
<buildCommand>
<name>org.eclipse.m2e.core.maven2Builder</name>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.eclipse.jdt.core.javanature</nature>
<nature>org.eclipse.m2e.core.maven2Nature</nature>
</natures>
</projectDescription>


@@ -0,0 +1,13 @@
This content is produced and maintained by the openHAB project.
* Project home: https://www.openhab.org
== Declared Project Licenses
This program and the accompanying materials are made available under the terms
of the Eclipse Public License 2.0 which is available at
https://www.eclipse.org/legal/epl-2.0/.
== Source Code
https://github.com/openhab/openhab-addons


@@ -0,0 +1,233 @@
# rrd4j Persistence
The [rrd4j](https://github.com/rrd4j/rrd4j) persistence service is based on a round-robin database.
In contrast to a "normal" database such as db4o, a round-robin database does not grow in size - it has a fixed allocated size.
This is accomplished by saving a fixed amount of datapoints and by doing data compression, which means that the older the data is, the less values are available.
The data is kept in several "archives", each holding the data for its set timeframe at a defined level of granularity.
The starting point for all archives is the actually saved data sample (Item value).
So while you might store a sample value every minute for the last 8 hours, you might store the average per day for the last year.
Because of this data compression, the service cannot provide precise answers to all queries: the older the data, the coarser the stored values.
NOTE: rrd4j is for storing numerical data only.
It cannot store complex data types.
## Persistence Process
Round-robin databases (RRDs) have fixed-length, so-called "archives" for storing values.
Think of an archive as a "drawer" with a fixed number of "storage boxes" in it.
The persistence service reads data "samples" from the openHAB core at regular intervals, and these are then put into the storage boxes.
Either (a) each sample is stored directly in its own box, or (b) multiple samples are consolidated into a single box using a consolidation function.
The service starts by storing samples in the leftmost box in the drawer.
Once the leftmost box is full, the service starts filling the next box to the right; and so on.
Once the rightmost box in the drawer is full, the leftmost box is emptied, the content of all boxes is moved one box to the left, and new content is added to the rightmost box.
An example is shown below; the values used in it are arbitrary and may be chosen differently by the user:
- Samples are taken at intervals of `60` seconds
- They are consolidated by the `AVERAGE` function, over `10` samples, into boxes i.e. a box covers `10 X 60` seconds
- The full archive contains `250` boxes i.e. the archive/drawer covers `60 X 10 X 250` seconds
## Configuration
Two things must be done for an Item to be persisted:
1. It must have a persistence strategy defined in the `rrd4j.persist` file.
2. It must have a `datasource` defined as follows.
## Datasources
The database comprises at least one datasource.
The rrd4j service automatically creates one internal _**default**_ datasource for you.
Other datasources **may** be configured in addition, in the `services/rrd4j.cfg` file.
By default, if `services/rrd4j.cfg` does not exist, or if an Item is not explicitly listed in a `<dsName>.items` property value in it, then the respective Item will be persisted according to the [default datasource settings](#default-datasource).
By contrast, if an Item **is** explicitly listed in a `<dsName>.items` property value, then it will be persisted according to those respective datasource settings.
Each datasource is defined by three property values (`def`, `archives`, `items`), where the `archives` property can comprise settings for one or more archives.
The various datasource property values are explained in the table below.
| Property | Description |
|---------------------|-------------|
| `<dsName>`.def | Definition of the range of sample values to be taken, and when. The format is `<dsType>,<heartBeat>,<minValue>,<maxValue>,<sampleInterval>` |
| `<dsName>`.archives | List of archives to be created. Each archive defines which subset of data samples shall be archived, and for how long. Consists of one or more archive entries separated by a ":" character. The format for one archive entry is `<consolidationFunction>,<xff>,<samplesPerBox>,<boxCount>` |
| `<dsName>`.items | List of Items whose values shall be sampled and stored in the archive. The format is `Item1,Item2` _**Note: the same Item is not allowed to be listed in more than one datasource!**_ |
For example:
```
ctr24h.def=COUNTER,900,0,U,60
ctr24h.archives=AVERAGE,0.5,1,480:AVERAGE,0.5,10,144
ctr24h.items=Item1,Item2
```
The description of the various datasource property elements is as follows:
### `<dsName>` (Datasource Name)
The name of the datasource.
It must be an alphanumeric string.
### `<dsType>` (Datasource Type)
Defines the type of data to be stored.
It must be one of the following string values:
- **COUNTER** represents an ever-incrementing value (historically this was used for packet or traffic counters on network interfaces; a typical home-automation application would be your electricity meter). If you store the values of this counter in a simple database and chart them, you will most likely see a nearly flat line, because the increments per time are small compared to the absolute value (e.g. your electricity meter reads 60567 kWh and you add 0.5 kWh per hour, so your chart over the whole day will show 60567 at the start and 60579 at the end, which is nearly invisible). RRD4J helps you out and will display the difference from one stored value to the next (depending on the selected size). Please note that the persistence extensions will return differences instead of the actual values if you use this type; this especially leads to wrong values if you try to restoreOnStartup!
- **GAUGE** represents the reading of e.g. a temperature sensor. You'll see only small deviation over the day and your values will be within a small range, clearly visible within a chart.
- **ABSOLUTE** is like a counter, but RRD4J assumes that the counter is reset when the value is read. So these are basically the delta values between the reads.
- **DERIVE** is like a counter, but it can also decrease and therefore have a negative delta.
### `<heartBeat>` (Heart Beat)
The heartbeat parameter helps the database to detect missing values: if no new value is stored within `<heartBeat>` seconds, the value is considered missing when charting.
It must be a positive integer value.
### `<minValue> / <maxValue>` (Minimum / Maximum Value)
These parameters define the range of acceptable sample values for that datasource.
They must be either:
- A numeric value, or
- The letter "U" (unlimited)
### `<sampleInterval>` (Sample Interval)
The time interval (in seconds) between reading consecutive samples from the openHAB core.
It must be a positive integer value.
### `<consolidationFunction>` (Consolidation Function)
Determines the type of data compression to be used when more than one sample is to be stored in a single "storage box".
So if you use the `AVERAGE` function, and two samples of `20.0` and `21.0` are to be stored, then the value `20.5` would be stored in the box.
It must be one of the following strings:
- **AVERAGE** the average of all the samples is stored in the box
- **MIN** the lowest sample is stored in the box
- **MAX** the highest sample is stored in the box
- **LAST** the last sample is stored in the box
- **FIRST** the first sample is stored in the box
- **TOTAL** the sum of all samples is stored in the box
All archives of a datasource must use the same `<consolidationFunction>`.
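The consolidation step can be illustrated with a short sketch (plain Java, not the service's actual code; the class and method names are chosen for illustration):

```java
import java.util.Arrays;

public class ConsolidationDemo {

    // Consolidate a full "storage box" of samples with the given function.
    public static double consolidate(String function, double[] samples) {
        switch (function) {
            case "AVERAGE": return Arrays.stream(samples).average().orElse(Double.NaN);
            case "MIN":     return Arrays.stream(samples).min().orElse(Double.NaN);
            case "MAX":     return Arrays.stream(samples).max().orElse(Double.NaN);
            case "FIRST":   return samples[0];
            case "LAST":    return samples[samples.length - 1];
            case "TOTAL":   return Arrays.stream(samples).sum();
            default:        throw new IllegalArgumentException(function);
        }
    }

    public static void main(String[] args) {
        double[] samples = { 20.0, 21.0 };
        System.out.println(consolidate("AVERAGE", samples)); // 20.5, as in the text above
        System.out.println(consolidate("MAX", samples));     // 21.0
    }
}
```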
### `<xff>` (X-files Factor)
Defines the maximum allowed proportion of data samples that are stored as NaN ("Not a Number") relative to the set number of `<samplesPerBox>`. If this proportion exceeds the set value, NaN is persisted instead of the consolidated value. Using 0.5 would require at least 50 percent of the data samples to hold a value other than NaN.
It must be a value between 0 and 1.
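As a sketch of this rule (again plain Java, not the service's code): with `<samplesPerBox> = 10` and `xff = 0.5`, a box holding 6 NaN samples is itself stored as NaN, while a box with 5 NaN samples still gets a consolidated value.

```java
public class XffDemo {

    // A consolidated box becomes NaN when the proportion of NaN samples exceeds xff.
    public static boolean boxIsNaN(double xff, int nanSamples, int samplesPerBox) {
        return (double) nanSamples / samplesPerBox > xff;
    }

    public static void main(String[] args) {
        System.out.println(boxIsNaN(0.5, 6, 10)); // too many NaN samples: box is NaN
        System.out.println(boxIsNaN(0.5, 5, 10)); // exactly 50% valid: box gets a value
    }
}
```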
### `<samplesPerBox>` (Samples Per Box)
The number of consecutive data samples that will be consolidated to create a single entry ("storage box") in the database.
If `<samplesPerBox>` is greater than 1 then the samples will be consolidated into the "storage box" by means of the `<consolidationFunction>` described above.
The time span covered by a single "storage box" is therefore (`<sampleInterval>` x `<samplesPerBox>`) seconds.
It must be a positive integer value.
### `<boxCount>` (Box Count)
The number of "storage boxes" in the archive.
The time span covered by a full archive is therefore (`<sampleInterval>` x `<samplesPerBox>` x `<boxCount>`) seconds.
It must be a positive integer value.
### Multiple Possible Archives
As noted above, each datasource can have one or more archives.
The purpose of having several archives is that it allows a different granularity of data storage over different timespans.
In the example below:
```
ctr24h.def=COUNTER,900,0,U,60
ctr24h.archives=AVERAGE,0.5,1,480:AVERAGE,0.5,10,144
ctr24h.items=Item1,Item2
```
The `ctr24h.def` line defines a datasource using a COUNTER, a `<heartBeat>` of 900 seconds, a `<minValue>` of 0, a `<maxValue>` of unlimited (`U`) and a `<sampleInterval>` of 60 seconds.
The first archive entry in the `ctr24h.archives` parameter has `480` boxes, each containing `1` sample (or, to be exact, the `AVERAGE` of `1` sample).
So it covers `480 X 60` seconds of data (8 hours) at a granularity of one minute.
As a general rule the first archive (and maybe the only one) should have `<samplesPerBox> = 1` so that each sample is stored in one box.
And the second archive entry has `144` boxes each containing the `AVERAGE` of `10` samples.
So it covers `144 X 10 X 60` seconds of data (24 hours) at a granularity of ten minutes.
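The coverage arithmetic from this example can be sketched as follows (a minimal helper for illustration, not part of the service):

```java
public class ArchiveCoverage {

    // Seconds covered by one archive: sampleInterval * samplesPerBox * boxCount
    public static long coverageSeconds(int sampleInterval, int samplesPerBox, int boxCount) {
        return (long) sampleInterval * samplesPerBox * boxCount;
    }

    public static void main(String[] args) {
        // ctr24h.archives=AVERAGE,0.5,1,480:AVERAGE,0.5,10,144 with a 60 s sample interval
        System.out.println(coverageSeconds(60, 1, 480) / 3600);  // 8 hours (first archive)
        System.out.println(coverageSeconds(60, 10, 144) / 3600); // 24 hours (second archive)
    }
}
```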
## Default Datasource
The service always automatically creates an internal default datasource with the properties below.
```
defaultNumeric.def=GAUGE,60,U,U,60
defaultNumeric.archives=AVERAGE,0.5,1,480:AVERAGE,0.5,4,360:AVERAGE,0.5,14,644:AVERAGE,0.5,60,720:AVERAGE,0.5,720,730:AVERAGE,0.5,10080,520
```
The default datasource type is GAUGE, the heartbeat is 60s, minimum and maximum values are unlimited, and the sample interval is 60s.
The default archives are:
| Archive | Boxes | Samples per Box | Period covered |
|:---------:|:---------:|:--------:|:-------------:|
| 1 | 480 | 1 | 8 hrs |
| 2 | 360 | 4 | 24 hrs |
| 3 | 644 | 14 | 6.26 days |
| 4 | 720 | 60 | 30 days |
| 5 | 730 | 720 | 365 days |
| 6 | 520 | 10080 | 10 years |
There is no `.items` parameter for the default datasource.
Implicitly this means that any Item with an allocated strategy in the `rrd4j.persist` file will be persisted using the above-mentioned default settings, unless the Item is explicitly listed in the `.items` property value of a datasource in the `rrd4j.cfg` file.
---
## Examples
### `rrd4j.cfg` file
```
ctr24h.def=COUNTER,900,0,U,60
ctr24h.archives=AVERAGE,0.5,1,480:AVERAGE,0.5,10,144
ctr24h.items=Item1,Item2
ctr7d.def=COUNTER,900,0,U,60
ctr7d.archives=AVERAGE,0.5,1,480:AVERAGE,0.5,10,144:AVERAGE,0.5,60,672
ctr7d.items=Item3,Item4
```
### `rrd4j.persist` file:
```java
Strategies {
// for rrd charts, we need a cron strategy
everyMinute : "0 * * * * ?"
}
Items {
// persist items on every change and every minute
* : strategy = everyChange, everyMinute
}
```
**IMPORTANT:**
The strategy `everyMinute` (60 seconds) **must** be used; otherwise no data will be persisted (stored).
Other strategies can be used in addition.
---
## Troubleshooting
From time to time, you may find that changing the Item type of a persisted data point causes charting or other problems. To resolve this issue, remove the old `<item_name>.rrd` file in the `${openhab_home}/etc/rrd4j` folder, or in the `/var/lib/openhab/persistence/rrd4j` folder for openHAB installations installed via apt-get.
Restoring Item values after startup takes some time, while rules may already have started to run in parallel. Especially in rules triggered via "System started", the restore may not yet have completed, resulting in undefined Item values. In these cases, the use of restored Item values should be delayed by a couple of seconds; this delay has to be determined experimentally.
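One common workaround is a short delay at the top of such a rule. The sketch below uses the openHAB rules DSL; the rule name, the Item `MyItem` and the 5-second delay are placeholders to be tuned experimentally:

```java
rule "Use restored values after startup"
when
    System started
then
    // give restoreOnStartup some time to finish; determine this delay experimentally
    Thread::sleep(5000)
    logInfo("startup", "Restored value: " + MyItem.state) // MyItem is a placeholder Item
end
```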


@@ -0,0 +1,29 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.openhab.addons.bundles</groupId>
<artifactId>org.openhab.addons.reactor.bundles</artifactId>
<version>3.0.0-SNAPSHOT</version>
</parent>
<artifactId>org.openhab.persistence.rrd4j</artifactId>
<name>openHAB Add-ons :: Bundles :: Persistence Service :: RRD4j</name>
<properties>
<bnd.importpackage>!com.mongodb.*,!io.netty.*,!com.bea.*,!io.reactivex.*,!org.reactivestreams.*,!de.erichseifert.*,!org.w3c.*,!org.jvnet.*,!com.ctc.*,!com.sun.*,!com.sleepycat.*,!dagger.*,!org.codehaus.*,!org.glassfish.*,!com.ibm.*,!javax.xml.*,!net.sf.*,!nu.xom.*,!org.bson.*,!org.dom4j.*,!org.jdom.*,!org.jdom2.*,!org.kxml2.io.*,!org.xmlpull.*,!sun.*</bnd.importpackage>
</properties>
<dependencies>
<!-- https://mvnrepository.com/artifact/org.rrd4j/rrd4j -->
<dependency>
<groupId>org.rrd4j</groupId>
<artifactId>rrd4j</artifactId>
<version>3.3.1</version>
</dependency>
</dependencies>
</project>


@@ -0,0 +1,11 @@
<?xml version="1.0" encoding="UTF-8"?>
<features name="org.openhab.persistence.rrd4j-${project.version}" xmlns="http://karaf.apache.org/xmlns/features/v1.4.0">
<repository>mvn:org.openhab.core.features.karaf/org.openhab.core.features.karaf.openhab-core/${ohc.version}/xml/features</repository>
<feature name="openhab-persistence-rrd4j" description="RRD4j Persistence" version="${project.version}">
<feature>openhab-runtime-base</feature>
<bundle start-level="80">mvn:org.openhab.addons.bundles/org.openhab.persistence.rrd4j/${project.version}</bundle>
<configfile finalname="${openhab.conf}/services/rrd4j.cfg" override="false">mvn:${project.groupId}/openhab-addons-external3/${project.version}/cfg/rrd4j</configfile>
</feature>
</features>


@@ -0,0 +1,58 @@
/**
* Copyright (c) 2010-2020 Contributors to the openHAB project
*
* See the NOTICE file(s) distributed with this work for additional
* information.
*
* This program and the accompanying materials are made available under the
* terms of the Eclipse Public License 2.0 which is available at
* http://www.eclipse.org/legal/epl-2.0
*
* SPDX-License-Identifier: EPL-2.0
*/
package org.openhab.persistence.rrd4j.internal;
import java.text.DateFormat;
import java.time.ZonedDateTime;
import org.openhab.core.persistence.HistoricItem;
import org.openhab.core.types.State;
/**
* This is a Java bean used to return historic items from a rrd4j database.
*
* @author Kai Kreuzer - Initial contribution
*
*/
public class RRD4jItem implements HistoricItem {
private final String name;
private final State state;
private final ZonedDateTime timestamp;
public RRD4jItem(String name, State state, ZonedDateTime timestamp) {
this.name = name;
this.state = state;
this.timestamp = timestamp;
}
@Override
public String getName() {
return name;
}
@Override
public State getState() {
return state;
}
@Override
public ZonedDateTime getTimestamp() {
return timestamp;
}
@Override
public String toString() {
return DateFormat.getDateTimeInstance().format(timestamp) + ": " + name + " -> " + state.toString();
}
}


@@ -0,0 +1,610 @@
/**
* Copyright (c) 2010-2020 Contributors to the openHAB project
*
* See the NOTICE file(s) distributed with this work for additional
* information.
*
* This program and the accompanying materials are made available under the
* terms of the Eclipse Public License 2.0 which is available at
* http://www.eclipse.org/legal/epl-2.0
*
* SPDX-License-Identifier: EPL-2.0
*/
package org.openhab.persistence.rrd4j.internal;
import java.io.File;
import java.io.IOException;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import org.eclipse.jdt.annotation.NonNullByDefault;
import org.eclipse.jdt.annotation.Nullable;
import org.openhab.core.OpenHAB;
import org.openhab.core.common.NamedThreadFactory;
import org.openhab.core.items.Item;
import org.openhab.core.items.ItemNotFoundException;
import org.openhab.core.items.ItemRegistry;
import org.openhab.core.library.items.ContactItem;
import org.openhab.core.library.items.DimmerItem;
import org.openhab.core.library.items.NumberItem;
import org.openhab.core.library.items.RollershutterItem;
import org.openhab.core.library.items.SwitchItem;
import org.openhab.core.library.types.DecimalType;
import org.openhab.core.library.types.OnOffType;
import org.openhab.core.library.types.OpenClosedType;
import org.openhab.core.library.types.PercentType;
import org.openhab.core.persistence.FilterCriteria;
import org.openhab.core.persistence.FilterCriteria.Ordering;
import org.openhab.core.persistence.HistoricItem;
import org.openhab.core.persistence.PersistenceItemInfo;
import org.openhab.core.persistence.PersistenceService;
import org.openhab.core.persistence.QueryablePersistenceService;
import org.openhab.core.persistence.strategy.PersistenceCronStrategy;
import org.openhab.core.persistence.strategy.PersistenceStrategy;
import org.openhab.core.types.State;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.rrd4j.ConsolFun;
import org.rrd4j.DsType;
import org.rrd4j.core.FetchData;
import org.rrd4j.core.FetchRequest;
import org.rrd4j.core.RrdDb;
import org.rrd4j.core.RrdDef;
import org.rrd4j.core.Sample;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* This is the implementation of the RRD4j {@link PersistenceService}. To learn
* more about RRD4j please visit their
* <a href="https://github.com/rrd4j/rrd4j">website</a>.
*
* @author Kai Kreuzer - Initial contribution
* @author Jan N. Klug - some improvements
* @author Karel Goderis - remove TimerThread dependency
*/
@NonNullByDefault
@Component(service = { PersistenceService.class,
QueryablePersistenceService.class }, configurationPid = "org.openhab.rrd4j")
public class RRD4jPersistenceService implements QueryablePersistenceService {
private static final String DEFAULT_OTHER = "default_other";
private static final String DEFAULT_NUMERIC = "default_numeric";
private static final String DEFAULT_QUANTIFIABLE = "default_quantifiable";
private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(3,
new NamedThreadFactory("RRD4j"));
private final Map<String, @Nullable RrdDefConfig> rrdDefs = new ConcurrentHashMap<>();
private static final String DATASOURCE_STATE = "state";
public static final String DB_FOLDER = getUserPersistenceDataFolder() + File.separator + "rrd4j";
private final Logger logger = LoggerFactory.getLogger(RRD4jPersistenceService.class);
private final Map<String, @Nullable ScheduledFuture<?>> scheduledJobs = new HashMap<>();
protected final ItemRegistry itemRegistry;
@Activate
public RRD4jPersistenceService(final @Reference ItemRegistry itemRegistry) {
this.itemRegistry = itemRegistry;
}
@Override
public String getId() {
return "rrd4j";
}
@Override
public String getLabel(@Nullable Locale locale) {
return "RRD4j";
}
@Override
public synchronized void store(final Item item, @Nullable final String alias) {
final String name = alias == null ? item.getName() : alias;
RrdDb db = getDB(name);
if (db != null) {
ConsolFun function = getConsolidationFunction(db);
long now = System.currentTimeMillis() / 1000;
if (function != ConsolFun.AVERAGE) {
try {
// we store the last value again, so that the value change
// in the database is not interpolated, but
// happens right at this spot
if (now - 1 > db.getLastUpdateTime()) {
// only do it if there is not already a value
double lastValue = db.getLastDatasourceValue(DATASOURCE_STATE);
if (!Double.isNaN(lastValue)) {
Sample sample = db.createSample();
sample.setTime(now - 1);
sample.setValue(DATASOURCE_STATE, lastValue);
sample.update();
logger.debug("Stored '{}' with state '{}' in rrd4j database (again)", name,
mapToState(lastValue, item.getName()));
}
}
} catch (IOException e) {
logger.debug("Error storing last value (again): {}", e.getMessage());
}
}
try {
Sample sample = db.createSample();
sample.setTime(now);
DecimalType state = item.getStateAs(DecimalType.class);
if (state != null) {
double value = state.toBigDecimal().doubleValue();
if (db.getDatasource(DATASOURCE_STATE).getType() == DsType.COUNTER) { // counter values must be
// adjusted by stepsize
value = value * db.getRrdDef().getStep();
}
sample.setValue(DATASOURCE_STATE, value);
sample.update();
logger.debug("Stored '{}' with state '{}' in rrd4j database", name, state);
}
} catch (IllegalArgumentException e) {
if (e.getMessage().contains("at least one second step is required")) {
// we try to store the value one second later
ScheduledFuture<?> job = scheduledJobs.get(name);
if (job != null) {
job.cancel(true);
scheduledJobs.remove(name);
}
job = scheduler.schedule(() -> store(item, name), 1, TimeUnit.SECONDS);
scheduledJobs.put(name, job);
} else {
logger.warn("Could not persist '{}' to rrd4j database: {}", name, e.getMessage());
}
} catch (Exception e) {
logger.warn("Could not persist '{}' to rrd4j database: {}", name, e.getMessage());
}
try {
db.close();
} catch (IOException e) {
logger.debug("Error closing rrd4j database: {}", e.getMessage());
}
}
}
@Override
public void store(Item item) {
store(item, null);
}
@Override
public Iterable<HistoricItem> query(FilterCriteria filter) {
String itemName = filter.getItemName();
RrdDb db = getDB(itemName);
if (db != null) {
ConsolFun consolidationFunction = getConsolidationFunction(db);
long start = 0L;
long end = filter.getEndDate() == null ? System.currentTimeMillis() / 1000
: filter.getEndDate().toInstant().getEpochSecond();
try {
if (filter.getBeginDate() == null) {
// as rrd goes back for years and gets more and more
// inaccurate, we only support descending order
// and a single return value
// if there is no begin date is given - this case is
// required specifically for the historicState()
// query, which we want to support
if (filter.getOrdering() == Ordering.DESCENDING && filter.getPageSize() == 1
&& filter.getPageNumber() == 0) {
if (filter.getEndDate() == null) {
// we are asked only for the most recent value!
double lastValue = db.getLastDatasourceValue(DATASOURCE_STATE);
if (!Double.isNaN(lastValue)) {
HistoricItem rrd4jItem = new RRD4jItem(itemName, mapToState(lastValue, itemName),
ZonedDateTime.ofInstant(
Instant.ofEpochMilli(db.getLastArchiveUpdateTime() * 1000),
ZoneId.systemDefault()));
return Collections.singletonList(rrd4jItem);
} else {
return Collections.emptyList();
}
} else {
start = end;
}
} else {
throw new UnsupportedOperationException("rrd4j does not allow queries without a begin date, "
+ "unless order is descending and a single value is requested");
}
} else {
start = filter.getBeginDate().toInstant().getEpochSecond();
}
FetchRequest request = db.createFetchRequest(consolidationFunction, start, end, 1);
List<HistoricItem> items = new ArrayList<>();
FetchData result = request.fetchData();
long ts = result.getFirstTimestamp();
long step = result.getRowCount() > 1 ? result.getStep() : 0;
for (double value : result.getValues(DATASOURCE_STATE)) {
if (!Double.isNaN(value) && (((ts >= start) && (ts <= end)) || (start == end))) {
RRD4jItem rrd4jItem = new RRD4jItem(itemName, mapToState(value, itemName),
ZonedDateTime.ofInstant(Instant.ofEpochMilli(ts * 1000), ZoneId.systemDefault()));
items.add(rrd4jItem);
}
ts += step;
}
return items;
} catch (IOException e) {
logger.warn("Could not query rrd4j database for item '{}': {}", itemName, e.getMessage());
}
}
return Collections.emptyList();
}
@Override
public Set<PersistenceItemInfo> getItemInfo() {
return Collections.emptySet();
}
protected @Nullable synchronized RrdDb getDB(String alias) {
RrdDb db = null;
File file = new File(DB_FOLDER + File.separator + alias + ".rrd");
try {
if (file.exists()) {
// recreate the RrdDb instance from the file
db = new RrdDb(file.getAbsolutePath());
} else {
File folder = new File(DB_FOLDER);
if (!folder.exists()) {
folder.mkdirs();
}
// create a new database file
db = new RrdDb(getRrdDef(alias, file));
}
} catch (IOException e) {
logger.error("Could not create rrd4j database file '{}': {}", file.getAbsolutePath(), e.getMessage());
} catch (RejectedExecutionException e) {
// this happens if the system is shut down
logger.debug("Could not create rrd4j database file '{}': {}", file.getAbsolutePath(), e.getMessage());
}
return db;
}
private @Nullable RrdDefConfig getRrdDefConfig(String itemName) {
RrdDefConfig useRdc = null;
for (Map.Entry<String, @Nullable RrdDefConfig> e : rrdDefs.entrySet()) {
// try to find special config
RrdDefConfig rdc = e.getValue();
if (rdc != null && rdc.appliesTo(itemName)) {
useRdc = rdc;
break;
}
}
if (useRdc == null) { // not defined, use defaults
try {
Item item = itemRegistry.getItem(itemName);
if (item instanceof NumberItem) {
NumberItem numberItem = (NumberItem) item;
return numberItem.getDimension() != null ? rrdDefs.get(DEFAULT_QUANTIFIABLE)
: rrdDefs.get(DEFAULT_NUMERIC);
}
} catch (ItemNotFoundException e) {
logger.debug("Could not find item '{}' in registry", itemName);
}
}
return rrdDefs.get(DEFAULT_OTHER);
}
private RrdDef getRrdDef(String itemName, File file) {
RrdDef rrdDef = new RrdDef(file.getAbsolutePath());
RrdDefConfig useRdc = getRrdDefConfig(itemName);
if (useRdc != null) {
rrdDef.setStep(useRdc.step);
rrdDef.setStartTime(System.currentTimeMillis() / 1000 - 1);
rrdDef.addDatasource(DATASOURCE_STATE, useRdc.dsType, useRdc.heartbeat, useRdc.min, useRdc.max);
for (RrdArchiveDef rad : useRdc.archives) {
rrdDef.addArchive(rad.fcn, rad.xff, rad.steps, rad.rows);
}
}
return rrdDef;
}
public ConsolFun getConsolidationFunction(RrdDb db) {
try {
return db.getRrdDef().getArcDefs()[0].getConsolFun();
} catch (IOException e) {
return ConsolFun.MAX;
}
}
private State mapToState(double value, String itemName) {
try {
Item item = itemRegistry.getItem(itemName);
if (item instanceof SwitchItem && !(item instanceof DimmerItem)) {
return value == 0.0d ? OnOffType.OFF : OnOffType.ON;
} else if (item instanceof ContactItem) {
return value == 0.0d ? OpenClosedType.CLOSED : OpenClosedType.OPEN;
} else if (item instanceof DimmerItem || item instanceof RollershutterItem) {
// make sure Items that need PercentTypes instead of DecimalTypes do receive the right information
return new PercentType((int) Math.round(value * 100));
}
} catch (ItemNotFoundException e) {
logger.debug("Could not find item '{}' in registry", itemName);
}
// just return a DecimalType as a fallback
return new DecimalType(value);
}
private static String getUserPersistenceDataFolder() {
return OpenHAB.getUserDataFolder() + File.separator + "persistence";
}
/**
* {@inheritDoc}
*/
public void activate(final Map<String, Object> config) {
// add default configurations
RrdDefConfig defaultNumeric = new RrdDefConfig(DEFAULT_NUMERIC);
// use 10 seconds as a step size for numeric values and allow a 10 minute silence between updates
defaultNumeric.setDef("GAUGE,600,U,U,10");
// define 5 different boxes:
// 1. granularity of 10s for the last hour
// 2. granularity of 1m for the last week
// 3. granularity of 15m for the last year
// 4. granularity of 1h for the last 5 years
// 5. granularity of 1d for the last 10 years
defaultNumeric.addArchives("LAST,0.5,1,360:LAST,0.5,6,10080:LAST,0.5,90,36500:LAST,0.5,8640,3650");
rrdDefs.put(DEFAULT_NUMERIC, defaultNumeric);
RrdDefConfig defaultQuantifiable = new RrdDefConfig(DEFAULT_QUANTIFIABLE);
// use 10 seconds as a step size for numeric values and allow a 10 minute silence between updates
defaultQuantifiable.setDef("GAUGE,600,U,U,10");
// define 5 different boxes:
// 1. granularity of 10s for the last hour
// 2. granularity of 1m for the last week
// 3. granularity of 15m for the last year
// 4. granularity of 1h for the last 5 years
// 5. granularity of 1d for the last 10 years
defaultQuantifiable
.addArchives("AVERAGE,0.5,1,360:AVERAGE,0.5,6,10080:LAST,0.5,90,36500:AVERAGE,0.5,8640,3650");
rrdDefs.put(DEFAULT_QUANTIFIABLE, defaultQuantifiable);
RrdDefConfig defaultOther = new RrdDefConfig(DEFAULT_OTHER);
// use 5 seconds as a step size for discrete values and allow a 1h silence between updates
defaultOther.setDef("GAUGE,3600,U,U,5");
// define 4 different boxes:
// 1. granularity of 5s for the last hour
// 2. granularity of 1m for the last week
// 3. granularity of 15m for the last year
// 4. granularity of 4h for the last 10 years
defaultOther.addArchives("LAST,0.5,1,1440:LAST,0.5,12,10080:LAST,0.5,180,35040:LAST,0.5,240,21900");
rrdDefs.put(DEFAULT_OTHER, defaultOther);
if (config.isEmpty()) {
logger.debug("using default configuration only");
return;
}
Iterator<String> keys = config.keySet().iterator();
while (keys.hasNext()) {
String key = keys.next();
if (key.equals("service.pid") || key.equals("component.name")) {
// ignore service.pid and name
continue;
}
String[] subkeys = key.split("\\.");
if (subkeys.length != 2) {
logger.debug("config '{}' should have the format 'name.configkey'", key);
continue;
}
Object v = config.get(key);
if (v instanceof String) {
String value = (String) v;
String name = subkeys[0].toLowerCase();
String property = subkeys[1].toLowerCase();
if (value.isBlank()) {
logger.trace("Config is empty: {}", property);
continue;
} else {
logger.trace("Processing config: {} = {}", property, value);
}
RrdDefConfig rrdDef = rrdDefs.get(name);
if (rrdDef == null) {
rrdDef = new RrdDefConfig(name);
rrdDefs.put(name, rrdDef);
}
try {
if (property.equals("def")) {
rrdDef.setDef(value);
} else if (property.equals("archives")) {
rrdDef.addArchives(value);
} else if (property.equals("items")) {
rrdDef.addItems(value);
} else {
logger.debug("Unknown property {} : {}", property, value);
}
} catch (IllegalArgumentException e) {
logger.warn("Ignoring illegal configuration: {}", e.getMessage());
}
}
}
// discard definitions that were not successfully initialized
Iterator<RrdDefConfig> it = rrdDefs.values().iterator();
while (it.hasNext()) {
RrdDefConfig rrdDef = it.next();
if (rrdDef.isValid()) {
logger.debug("Created {}", rrdDef);
} else {
logger.info("Removing invalid definition {}", rrdDef);
it.remove(); // remove via the iterator to avoid ConcurrentModificationException
}
}
}
private class RrdArchiveDef {
public @Nullable ConsolFun fcn;
public double xff;
public int steps, rows;
@Override
public String toString() {
StringBuilder sb = new StringBuilder(" " + fcn);
sb.append(" xff = ").append(xff);
sb.append(" steps = ").append(steps);
sb.append(" rows = ").append(rows);
return sb.toString();
}
}
private class RrdDefConfig {
public String name;
public @Nullable DsType dsType;
public int heartbeat, step;
public double min, max;
public List<RrdArchiveDef> archives;
public List<String> itemNames;
private boolean isInitialized;
public RrdDefConfig(String name) {
this.name = name;
archives = new ArrayList<>();
itemNames = new ArrayList<>();
isInitialized = false;
}
/**
 * Parses a datasource definition of the form
 * {@code <dsType>,<heartbeat>,<min>,<max>,<step>}, e.g. "GAUGE,600,U,U,10",
 * where "U" stands for an unknown/unbounded min or max value.
 */
public void setDef(String defString) {
String[] opts = defString.split(",");
if (opts.length != 5) { // check if correct number of parameters
logger.warn("invalid number of parameters {}: {}", name, defString);
return;
}
if (opts[0].equals("ABSOLUTE")) { // dsType
dsType = DsType.ABSOLUTE;
} else if (opts[0].equals("COUNTER")) {
dsType = DsType.COUNTER;
} else if (opts[0].equals("DERIVE")) {
dsType = DsType.DERIVE;
} else if (opts[0].equals("GAUGE")) {
dsType = DsType.GAUGE;
} else {
logger.warn("{}: dsType {} not supported", name, opts[0]);
return; // leave the definition uninitialized so that it is discarded later
}
// NumberFormatException extends IllegalArgumentException and is handled by the caller
heartbeat = Integer.parseInt(opts[1]);
min = opts[2].equals("U") ? Double.NaN : Double.parseDouble(opts[2]);
max = opts[3].equals("U") ? Double.NaN : Double.parseDouble(opts[3]);
step = Integer.parseInt(opts[4]);
isInitialized = true; // successfully initialized
}
/**
 * Parses a colon-separated list of archive definitions, each of the form
 * {@code <consolFun>,<xff>,<steps>,<rows>}, e.g. "AVERAGE,0.5,1,360".
 */
public void addArchives(String archivesString) {
String[] splitArchives = archivesString.split(":");
for (String archiveString : splitArchives) {
String[] opts = archiveString.split(",");
if (opts.length != 4) { // check if correct number of parameters
logger.warn("invalid number of parameters {}: {}", name, archiveString);
return;
}
RrdArchiveDef arc = new RrdArchiveDef();
if (opts[0].equals("AVERAGE")) {
arc.fcn = ConsolFun.AVERAGE;
} else if (opts[0].equals("MIN")) {
arc.fcn = ConsolFun.MIN;
} else if (opts[0].equals("MAX")) {
arc.fcn = ConsolFun.MAX;
} else if (opts[0].equals("LAST")) {
arc.fcn = ConsolFun.LAST;
} else if (opts[0].equals("FIRST")) {
arc.fcn = ConsolFun.FIRST;
} else if (opts[0].equals("TOTAL")) {
arc.fcn = ConsolFun.TOTAL;
} else {
logger.warn("{}: consolidation function {} not supported", name, opts[0]);
continue; // skip this archive instead of adding one without a consolidation function
}
arc.xff = Double.parseDouble(opts[1]);
arc.steps = Integer.parseInt(opts[2]);
arc.rows = Integer.parseInt(opts[3]);
archives.add(arc);
}
}
public void addItems(String itemsString) {
itemNames.addAll(List.of(itemsString.split(",")));
}
public boolean appliesTo(String item) {
return itemNames.contains(item);
}
public boolean isValid() {
// a valid configuration must be initialized and contain at least one archive
return isInitialized && !archives.isEmpty();
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder(name);
sb.append(" = ").append(dsType);
sb.append(" heartbeat = ").append(heartbeat);
sb.append(" min/max = ").append(min).append("/").append(max);
sb.append(" step = ").append(step);
sb.append(" ").append(archives.size()).append(" archive(s) = [");
for (RrdArchiveDef arc : archives) {
sb.append(arc.toString());
}
sb.append("] ");
sb.append(itemNames.size()).append(" item(s) = [");
for (String item : itemNames) {
sb.append(item).append(" ");
}
sb.append("]");
return sb.toString();
}
}
@Override
public List<PersistenceStrategy> getDefaultStrategies() {
return List.of(PersistenceStrategy.Globals.RESTORE, PersistenceStrategy.Globals.CHANGE,
new PersistenceCronStrategy("everyMinute", "0 * * * * ?"));
}
}
/**
* Copyright (c) 2010-2020 Contributors to the openHAB project
*
* See the NOTICE file(s) distributed with this work for additional
* information.
*
* This program and the accompanying materials are made available under the
* terms of the Eclipse Public License 2.0 which is available at
* http://www.eclipse.org/legal/epl-2.0
*
* SPDX-License-Identifier: EPL-2.0
*/
package org.openhab.persistence.rrd4j.internal.charts;
import java.awt.Color;
import java.awt.Font;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Date;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import javax.imageio.ImageIO;
import javax.servlet.Servlet;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.openhab.core.items.GroupItem;
import org.openhab.core.items.Item;
import org.openhab.core.items.ItemNotFoundException;
import org.openhab.core.library.items.NumberItem;
import org.openhab.core.ui.chart.ChartProvider;
import org.openhab.core.ui.items.ItemUIRegistry;
import org.openhab.persistence.rrd4j.internal.RRD4jPersistenceService;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.http.HttpService;
import org.osgi.service.http.NamespaceException;
import org.rrd4j.ConsolFun;
import org.rrd4j.core.RrdDb;
import org.rrd4j.graph.RrdGraph;
import org.rrd4j.graph.RrdGraphDef;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
 * This servlet generates time-series charts for a given set of items.
 * It accepts the following HTTP parameters:
 * <ul>
 * <li>w: width in pixels of image to generate</li>
 * <li>h: height in pixels of image to generate</li>
 * <li>period: the time span for the x-axis. Value can be h,4h,8h,12h,D,3D,W,2W,M,2M,4M,Y</li>
 * <li>items: A comma separated list of item names to display</li>
 * <li>groups: A comma separated list of group names, whose members should be displayed</li>
 * </ul>
 * Example request: {@code /rrdchart.png?w=480&h=240&period=W&items=Temperature}
*
* @author Kai Kreuzer - Initial contribution
* @author Chris Jackson - a few improvements
* @author Jan N. Klug - a few improvements
*
*/
@Component(service = ChartProvider.class)
public class RRD4jChartServlet implements Servlet, ChartProvider {
private final Logger logger = LoggerFactory.getLogger(RRD4jChartServlet.class);
/** the URI of this servlet */
public static final String SERVLET_NAME = "/rrdchart.png";
protected static final Color[] LINECOLORS = new Color[] { Color.RED, Color.GREEN, Color.BLUE, Color.MAGENTA,
Color.ORANGE, Color.CYAN, Color.PINK, Color.DARK_GRAY, Color.YELLOW };
protected static final Color[] AREACOLORS = new Color[] { new Color(255, 0, 0, 30), new Color(0, 255, 0, 30),
new Color(0, 0, 255, 30), new Color(255, 0, 255, 30), new Color(255, 128, 0, 30),
new Color(0, 255, 255, 30), new Color(255, 0, 128, 30), new Color(255, 128, 128, 30),
new Color(255, 255, 0, 30) };
protected static final Map<String, Long> PERIODS = new HashMap<>();
static {
PERIODS.put("h", -3600000L);
PERIODS.put("4h", -14400000L);
PERIODS.put("8h", -28800000L);
PERIODS.put("12h", -43200000L);
PERIODS.put("D", -86400000L);
PERIODS.put("3D", -259200000L);
PERIODS.put("W", -604800000L);
PERIODS.put("2W", -1209600000L);
PERIODS.put("M", -2592000000L);
PERIODS.put("2M", -5184000000L);
PERIODS.put("4M", -10368000000L);
PERIODS.put("Y", -31536000000L);
}
@Reference
protected HttpService httpService;
@Reference
protected ItemUIRegistry itemUIRegistry;
@Activate
protected void activate() {
try {
logger.debug("Starting up rrd chart servlet at {}", SERVLET_NAME);
httpService.registerServlet(SERVLET_NAME, this, new Hashtable<>(), httpService.createDefaultHttpContext());
} catch (NamespaceException | ServletException e) {
logger.error("Error during servlet startup", e);
}
}
@Deactivate
protected void deactivate() {
httpService.unregister(SERVLET_NAME);
}
@Override
public void service(ServletRequest req, ServletResponse res) throws ServletException, IOException {
logger.debug("RRD4J received incoming chart request: {}", req);
int width = 480;
try {
width = Integer.parseInt(req.getParameter("w"));
} catch (NumberFormatException e) {
// keep the default width if the parameter is missing or not a number
}
int height = 240;
try {
height = Integer.parseInt(req.getParameter("h"));
} catch (NumberFormatException e) {
// keep the default height if the parameter is missing or not a number
}
Long period = PERIODS.get(req.getParameter("period"));
if (period == null) {
// use a day as the default period
period = PERIODS.get("D");
}
// Create the start and stop time
Date timeEnd = new Date();
Date timeBegin = new Date(timeEnd.getTime() + period);
// Set the content type to that provided by the chart provider
res.setContentType("image/" + getChartType());
try {
BufferedImage chart = createChart(null, null, timeBegin, timeEnd, height, width, req.getParameter("items"),
req.getParameter("groups"), null, null);
ImageIO.write(chart, getChartType().toString(), res.getOutputStream());
} catch (ItemNotFoundException e) {
logger.debug("Item not found while generating chart: {}", e.getMessage());
} catch (IllegalArgumentException e) {
logger.debug("Illegal argument in chart", e);
}
}
/**
* Adds a line for the item to the graph definition.
* The color of the line is determined by the counter, it simply picks the according index from LINECOLORS (and
* rolls over if necessary).
*
* @param graphDef the graph definition to fill
* @param item the item to add a line for
* @param counter defines the number of the datasource and is used to determine the line color
*/
protected void addLine(RrdGraphDef graphDef, Item item, int counter) {
Color color = LINECOLORS[counter % LINECOLORS.length];
String label = itemUIRegistry.getLabel(item.getName());
String rrdName = RRD4jPersistenceService.DB_FOLDER + File.separator + item.getName() + ".rrd";
ConsolFun consolFun;
if (label != null && label.contains("[") && label.contains("]")) {
label = label.substring(0, label.indexOf('['));
}
try {
RrdDb db = new RrdDb(rrdName);
try {
// use the consolidation function of the first archive defined for this database
consolFun = db.getRrdDef().getArcDefs()[0].getConsolFun();
} finally {
db.close(); // always close the database, even if reading the definition fails
}
} catch (IOException e) {
consolFun = ConsolFun.MAX;
}
graphDef.datasource(Integer.toString(counter), rrdName, "state", consolFun);
if (item instanceof NumberItem) {
// for numeric items we only draw a line
graphDef.line(Integer.toString(counter), color, label, 2);
} else {
// for all other items we draw a line and fill the area beneath it with a transparent color
Color areaColor = AREACOLORS[counter % AREACOLORS.length];
graphDef.area(Integer.toString(counter), areaColor);
graphDef.line(Integer.toString(counter), color, label, 2);
}
}
@Override
public void init(ServletConfig config) throws ServletException {
}
@Override
public ServletConfig getServletConfig() {
return null;
}
@Override
public String getServletInfo() {
return null;
}
@Override
public void destroy() {
}
// ----------------------------------------------------------
// The following methods implement the ChartServlet interface
@Override
public String getName() {
return "rrd4j";
}
@Override
public BufferedImage createChart(String service, String theme, Date startTime, Date endTime, int height, int width,
String items, String groups, Integer dpi, Boolean legend) throws ItemNotFoundException {
RrdGraphDef graphDef = new RrdGraphDef();
// a negative number of seconds, which RRD4j interprets as an offset relative to now
long period = (startTime.getTime() - endTime.getTime()) / 1000;
graphDef.setWidth(width);
graphDef.setHeight(height);
graphDef.setAntiAliasing(true);
graphDef.setImageFormat("PNG");
graphDef.setStartTime(period);
graphDef.setTextAntiAliasing(true);
graphDef.setLargeFont(new Font("SansSerif", Font.PLAIN, 15));
graphDef.setSmallFont(new Font("SansSerif", Font.PLAIN, 11));
int seriesCounter = 0;
// Loop through all the items
if (items != null) {
String[] itemNames = items.split(",");
for (String itemName : itemNames) {
Item item = itemUIRegistry.getItem(itemName);
addLine(graphDef, item, seriesCounter++);
}
}
// Loop through all the groups and add each item from each group
if (groups != null) {
String[] groupNames = groups.split(",");
for (String groupName : groupNames) {
Item item = itemUIRegistry.getItem(groupName);
if (item instanceof GroupItem) {
GroupItem groupItem = (GroupItem) item;
for (Item member : groupItem.getMembers()) {
addLine(graphDef, member, seriesCounter++);
}
} else {
throw new IllegalArgumentException("Item '" + item.getName() + "' specified in groups is not a group.");
}
}
}
// Write the chart as a PNG image
RrdGraph graph;
try {
graph = new RrdGraph(graphDef);
BufferedImage bi = new BufferedImage(graph.getRrdGraphInfo().getWidth(),
graph.getRrdGraphInfo().getHeight(), BufferedImage.TYPE_INT_RGB);
graph.render(bi.getGraphics());
return bi;
} catch (IOException e) {
logger.error("Error generating graph.", e);
}
return null;
}
@Override
public ImageType getChartType() {
return ImageType.png;
}
}