Release 2.23.2

- Add QPACK stats collection and experimentation mode, see the new
  es_qpack_experiment setting.
- Log busy connection stats every second using the new "conn-stats"
  log module.
- Log about skipping only once.
- Update HTTP/3 greased frame type formula.
- Use ls-qpack v2.2.1.
Dmitri Tikhonov 2020-10-21 09:05:26 -04:00
parent 6a6683860a
commit 758aff32b9
30 changed files with 800 additions and 33 deletions


@ -1,3 +1,13 @@
2020-10-21
- 2.23.2
- Add QPACK stats collection and experimentation mode, see the new
es_qpack_experiment setting.
- Log busy connection stats every second using the new "conn-stats"
log module.
- Log about skipping only once.
- Update HTTP/3 greased frame type formula.
- Use ls-qpack v2.2.1.
2020-10-13
- 2.23.1
- [FEATURE] IETF Client 0-RTT support.


@ -72,6 +72,10 @@ ELSE()
#SET(MY_CMAKE_FLAGS "${MY_CMAKE_FLAGS} -DLSQUIC_LOWEST_LOG_LEVEL=LSQ_LOG_INFO")
ENDIF()
IF (NOT "$ENV{EXTRA_CFLAGS}" MATCHES "-DLSQUIC_CONN_STATS=")
SET(MY_CMAKE_FLAGS "${MY_CMAKE_FLAGS} -DLSQUIC_CONN_STATS=1")
ENDIF()
IF (LSQUIC_DEVEL)
SET(MY_CMAKE_FLAGS "${MY_CMAKE_FLAGS} -DLSQUIC_DEVEL=1")
ENDIF()


@ -11,11 +11,10 @@ Description
LiteSpeed QUIC (LSQUIC) Library is an open-source implementation of QUIC
and HTTP/3 functionality for servers and clients. Most of the code in this
distribution is used in our own products: LiteSpeed Web Server, LiteSpeed ADC,
and OpenLiteSpeed. Do not hesitate to report bugs back to us. Even better,
send us fixes and improvements!
and OpenLiteSpeed.
Currently supported QUIC versions are Q043, Q046, Q050, ID-27, ID-28, ID-29,
ID-30, and ID-31. Support for newer versions will be added soon after they
ID-30, and ID-31. Support for newer versions is added soon after they
are released.
Documentation
@ -157,6 +156,12 @@ The library has been tested on the following platforms:
- Windows
- x86_64
Get Involved
------------
Do not hesitate to report bugs back to us. Even better, send us fixes
and improvements!
Have fun,
LiteSpeed QUIC Team.


@ -68,10 +68,11 @@ duck_client_on_conn_closed (lsquic_conn_t *conn)
static ssize_t
duck_client_on_dg_write (lsquic_conn_t *conn, void *buf, size_t sz)
{
int s;
/* We only write one request */
s = lsquic_conn_want_datagram_write(conn, 0);
#ifndef NDEBUG
int s =
#endif
lsquic_conn_want_datagram_write(conn, 0);
assert(s == 1); /* Old value was "yes, we want to write a datagram" */
if (sz >= sizeof(REQUEST) - 1)


@ -1952,6 +1952,11 @@ set_engine_option (struct lsquic_engine_settings *settings,
settings->es_proc_time_thresh = atoi(val);
return 0;
}
if (0 == strncmp(name, "qpack_experiment", 16))
{
settings->es_qpack_experiment = atoi(val);
return 0;
}
break;
case 18:
if (0 == strncmp(name, "qpack_enc_max_size", 18))


@ -849,6 +849,20 @@ settings structure:
Default value is :macro:`LSQUIC_DF_EXT_HTTP_PRIO`
.. member:: int es_qpack_experiment
If set to 1, QPACK statistics are logged per connection.
If set to 2, QPACK experiments are run. In this mode, encoder
and decoder setting values are randomly selected (from the range
[0, whatever is specified in es_qpack_(enc|dec)_*]) and these
values along with compression ratio and user agent are logged at
NOTICE level when connection is destroyed. The purpose of these
experiments is to use compression performance statistics to figure
out a good set of default values.
Default value is :macro:`LSQUIC_DF_QPACK_EXPERIMENT`
To initialize the settings structure to library defaults, use the following
convenience function:
@ -1077,6 +1091,10 @@ out of date. Please check your :file:`lsquic.h` for actual values.*
Turn on Extensible HTTP Priorities by default.
.. macro:: LSQUIC_DF_QPACK_EXPERIMENT
By default, QPACK experiments are turned off.
Receiving Packets
-----------------


@ -26,7 +26,7 @@ author = u'LiteSpeed Technologies'
# The short X.Y version
version = u'2.23'
# The full version, including alpha/beta/rc tags
release = u'2.23.1'
release = u'2.23.2'
# -- General configuration ---------------------------------------------------


@ -26,3 +26,22 @@ server connections. It will just work. For example, the single
engine settings :type:`lsquic_engine_settings` will have to be separated
into client and server settings, as the two usually do need to have
separate settings.
Example Programs
================
*http_client does not work with www.google.com, www.facebook.com, etc.*
Check the version. By default, ``http_client`` will use the latest supported
version (at the time of this writing, "h3-31"), while the server may be using
an older version, such as "h3-29". Adding ``-o version=h3-29`` to the
command line may well solve your issue.
There is an `outstanding bug`_ where lsquic client does not perform version
negotiation correctly for HTTP/3. We do not expect this to be fixed, because
a) this version negotiation mechanism is likely to become defunct when QUIC v1
is released and b) version negotiation is not necessary for an HTTP/3 client,
because the other side's version is communicated to it via the ``Alt-Svc`` HTTP
header.
.. _`outstanding bug`: https://github.com/litespeedtech/lsquic/issues/180


@ -25,7 +25,7 @@ extern "C" {
#define LSQUIC_MAJOR_VERSION 2
#define LSQUIC_MINOR_VERSION 23
#define LSQUIC_PATCH_VERSION 1
#define LSQUIC_PATCH_VERSION 2
/**
* Engine flags:
@ -341,6 +341,9 @@ typedef struct ssl_ctx_st * (*lsquic_lookup_cert_f)(
#define LSQUIC_DF_QPACK_ENC_MAX_BLOCKED 100
#define LSQUIC_DF_QPACK_ENC_MAX_SIZE 4096
/* By default, QPACK experiments are turned off */
#define LSQUIC_DF_QPACK_EXPERIMENT 0
/** ECN is disabled by default */
#define LSQUIC_DF_ECN 0
@ -950,6 +953,21 @@ struct lsquic_engine_settings {
* Default value is @ref LSQUIC_DF_EXT_HTTP_PRIO
*/
int es_ext_http_prio;
/**
* If set to 1, QPACK statistics are logged per connection.
*
* If set to 2, QPACK experiments are run. In this mode, encoder
* and decoder setting values are randomly selected (from the range
* [0, whatever is specified in es_qpack_(enc|dec)_*]) and these
* values along with compression ratio and user agent are logged at
* NOTICE level when connection is destroyed. The purpose of these
* experiments is to use compression performance statistics to figure
* out a good set of default values.
*
* Default value is @ref LSQUIC_DF_QPACK_EXPERIMENT.
*/
int es_qpack_experiment;
};
/* Initialize `settings' to default values */


@ -64,6 +64,7 @@ SET(lsquic_STAT_SRCS
lsquic_qdec_hdl.c
lsquic_qenc_hdl.c
lsquic_qlog.c
lsquic_qpack_exp.c
lsquic_rechist.c
lsquic_rtt.c
lsquic_send_ctl.c


@ -70,6 +70,7 @@ liblsquic_a_SOURCES = \
lsquic_qdec_hdl.c \
lsquic_qenc_hdl.c \
lsquic_qlog.c \
lsquic_qpack_exp.c \
lsquic_rechist.c \
lsquic_rtt.c \
lsquic_send_ctl.c \


@ -305,3 +305,22 @@ lsquic_conn_get_min_datagram_size (struct lsquic_conn *lconn)
else
return 0;
}
#if LSQUIC_CONN_STATS
void
lsquic_conn_stats_diff (const struct conn_stats *cumulative_stats,
const struct conn_stats *previous_stats,
struct conn_stats *new_stats)
{
const unsigned long *const cum = (void *) cumulative_stats,
*const prev = (void *) previous_stats;
unsigned long *const new = (void *) new_stats;
unsigned i;
for (i = 0; i < sizeof(*new_stats) / sizeof(new[0]); ++i)
new[i] = cum[i] - prev[i];
}
#endif


@ -151,6 +151,9 @@ struct conn_iface
#if LSQUIC_CONN_STATS
const struct conn_stats *
(*ci_get_stats) (struct lsquic_conn *);
void
(*ci_log_stats) (struct lsquic_conn *);
#endif
void
@ -424,12 +427,18 @@ struct conn_stats {
unsigned long acks;
unsigned long packets; /* Number of sent packets */
unsigned long acked_via_loss; /* Number of packets acked via loss record */
unsigned long lost_packets;
unsigned long retx_packets; /* Number of retransmitted packets */
unsigned long bytes; /* Overall bytes out */
unsigned long headers_uncomp; /* Sum of uncompressed header bytes */
unsigned long headers_comp; /* Sum of compressed header bytes */
} out;
};
void
lsquic_conn_stats_diff (const struct conn_stats *cumulative,
const struct conn_stats *previous,
struct conn_stats *new);
#endif
#endif


@ -103,6 +103,7 @@
#define MIN(a, b) ((a) < (b) ? (a) : (b))
/* The batch of outgoing packets grows and shrinks dynamically */
/* Batch sizes must be powers of two */
#define MAX_OUT_BATCH_SIZE 1024
#define MIN_OUT_BATCH_SIZE 4
#define INITIAL_OUT_BATCH_SIZE 32
@ -132,6 +133,11 @@ engine_decref_conn (lsquic_engine_t *engine, lsquic_conn_t *conn,
static void
force_close_conn (lsquic_engine_t *engine, lsquic_conn_t *conn);
#if LSQUIC_CONN_STATS
static void
update_busy_detector (struct lsquic_engine *, struct lsquic_conn *, int);
#endif
#if LSQUIC_COUNT_ENGINE_CALLS
#define ENGINE_CALLS_INCR(e) do { ++(e)->n_engine_calls; } while (0)
#else
@ -276,6 +282,16 @@ struct lsquic_engine
#endif
struct crand crand;
EVP_AEAD_CTX retry_aead_ctx[N_IETF_RETRY_VERSIONS];
#if LSQUIC_CONN_STATS
struct {
uint16_t immed_ticks; /* bitmask */
#define MAX_IMMED_TICKS UINT16_MAX
struct lsquic_conn *last_conn, /* from last call */
*pin_conn, /* last connection with packet in */
*current; /* currently busy connection */
lsquic_time_t last_log;
} busy;
#endif
};
@ -896,6 +912,19 @@ destroy_conn (struct lsquic_engine *engine, struct lsquic_conn *conn,
}
#if LSQUIC_CONN_STATS
update_stats_sum(engine, conn);
if (engine->busy.last_conn == conn)
engine->busy.last_conn = NULL;
if (engine->busy.current == conn)
{
char cidstr[MAX_CID_LEN * 2 + 1];
lsquic_logger_log1(LSQ_LOG_NOTICE, LSQLM_CONN_STATS,
"busy connection %s is destroyed",
(lsquic_cid2str(lsquic_conn_log_cid(conn), cidstr), cidstr));
engine->busy.current = NULL;
engine->busy.last_log = 0;
}
if (engine->busy.pin_conn == conn)
engine->busy.pin_conn = NULL;
#endif
--engine->n_conns;
conn->cn_flags |= LSCONN_NEVER_TICKABLE;
@ -1503,6 +1532,9 @@ process_packet_in (lsquic_engine_t *engine, lsquic_packet_in_t *packet_in,
packet_in_data = packet_in->pi_data;
packet_in_size = packet_in->pi_data_sz;
conn->cn_if->ci_packet_in(conn, packet_in);
#if LSQUIC_CONN_STATS
engine->busy.pin_conn = conn;
#endif
QLOG_PACKET_RX(lsquic_conn_log_cid(conn), packet_in, packet_in_data, packet_in_size);
lsquic_packet_in_put(&engine->pub.enp_mm, packet_in);
return 0;
@ -1588,8 +1620,7 @@ lsquic_engine_destroy (lsquic_engine_t *engine)
: 0);
fprintf(engine->stats_fh, " ACK frames: %lu\n", stats->in.n_acks);
fprintf(engine->stats_fh, " ACK frames processed: %lu\n", stats->in.n_acks_proc);
fprintf(engine->stats_fh, " ACK frames merged to new: %lu\n", stats->in.n_acks_merged[0]);
fprintf(engine->stats_fh, " ACK frames merged to old: %lu\n", stats->in.n_acks_merged[1]);
fprintf(engine->stats_fh, " ACK frames merged: %lu\n", stats->in.n_acks_merged);
fprintf(engine->stats_fh, "Out:\n");
fprintf(engine->stats_fh, " Total bytes: %lu\n", stats->out.bytes);
fprintf(engine->stats_fh, " packets: %lu\n", stats->out.packets);
@ -1956,6 +1987,14 @@ lsquic_engine_process_conns (lsquic_engine_t *engine)
lsquic_conn_t *conn;
lsquic_time_t now;
#if LSQUIC_CONN_STATS
if (engine->busy.pin_conn)
{
update_busy_detector(engine, engine->busy.pin_conn, 1);
engine->busy.pin_conn = NULL;
}
#endif
ENGINE_IN(engine);
now = lsquic_time_now();
@ -2263,6 +2302,27 @@ apply_hp (struct conns_out_iter *iter)
}
#if LSQUIC_CONN_STATS
static void
update_batch_size_stats (struct lsquic_engine *engine, unsigned batch_size)
{
struct batch_size_stats *const stats = &engine->pub.enp_batch_size_stats;
++stats->count;
if (batch_size > stats->max)
stats->max = batch_size;
if (batch_size < stats->min || 0 == stats->min)
stats->min = batch_size;
if (stats->avg)
stats->avg = ((float) batch_size - stats->avg) * 0.4 + stats->avg;
else
stats->avg = (float) batch_size;
}
#endif
static unsigned
send_batch (lsquic_engine_t *engine, const struct send_batch_ctx *sb_ctx,
unsigned n_to_send)
@ -2274,6 +2334,10 @@ send_batch (lsquic_engine_t *engine, const struct send_batch_ctx *sb_ctx,
CONST_BATCH struct out_batch *const batch = sb_ctx->batch;
struct lsquic_packet_out *CONST_BATCH *packet_out, *CONST_BATCH *end;
#if LSQUIC_CONN_STATS
update_batch_size_stats(engine, n_to_send);
#endif
apply_hp(sb_ctx->conns_iter);
#if CAN_LOSE_PACKETS
if (engine->flags & ENG_LOSE_PACKETS)
@ -2690,6 +2754,39 @@ next_new_full_conn (struct conns_stailq *new_full_conns)
}
#if LSQUIC_CONN_STATS
static void
maybe_log_conn_stats (struct lsquic_engine *engine, struct lsquic_conn *conn,
lsquic_time_t now)
{
char cidstr[MAX_CID_LEN * 2 + 1];
if (!LSQ_LOG_ENABLED_EXT(LSQ_LOG_NOTICE, LSQLM_CONN_STATS))
return;
if (conn->cn_last_ticked + 1000000 >= now)
{
if (0 == engine->busy.last_log
|| engine->busy.last_log + 1000000 - 1000 < now)
{
engine->busy.last_log = now;
conn->cn_if->ci_log_stats(conn);
}
}
else
{
lsquic_logger_log1(LSQ_LOG_NOTICE, LSQLM_CONN_STATS,
"stop logging status for connection %s: no longer busy",
(lsquic_cid2str(lsquic_conn_log_cid(conn), cidstr), cidstr));
engine->busy.current = NULL;
engine->busy.last_log = 0;
}
}
#endif
static void
process_connections (lsquic_engine_t *engine, conn_iter_f next_conn,
lsquic_time_t now)
@ -2726,6 +2823,10 @@ process_connections (lsquic_engine_t *engine, conn_iter_f next_conn,
|| (conn = next_new_full_conn(&new_full_conns)))
{
tick_st = conn->cn_if->ci_tick(conn, now);
#if LSQUIC_CONN_STATS
if (conn == engine->busy.current)
maybe_log_conn_stats(engine, conn, now);
#endif
conn->cn_last_ticked = now + i /* Maintain relative order */ ++;
if (tick_st & TICK_PROMOTE)
{
@ -2973,13 +3074,49 @@ lsquic_engine_cooldown (lsquic_engine_t *engine)
}
#if LSQUIC_CONN_STATS
static void
update_busy_detector (struct lsquic_engine *engine, struct lsquic_conn *conn,
int immed)
{
char cidstr[MAX_CID_LEN * 2 + 1];
if (!(LSQ_LOG_ENABLED_EXT(LSQ_LOG_NOTICE, LSQLM_CONN_STATS)
&& conn->cn_if->ci_log_stats))
return;
if (conn == engine->busy.last_conn)
{
engine->busy.immed_ticks <<= 1u;
engine->busy.immed_ticks |= immed;
if (MAX_IMMED_TICKS == engine->busy.immed_ticks)
{
if (engine->busy.current != conn)
lsquic_logger_log1(LSQ_LOG_NOTICE, LSQLM_CONN_STATS,
"connection %s marked busy: it's had %u immediate ticks "
"in a row",
(lsquic_cid2str(lsquic_conn_log_cid(conn), cidstr), cidstr),
(unsigned) (sizeof(engine->busy.immed_ticks) * 8));
engine->busy.current = conn;
}
}
else
engine->busy.immed_ticks <<= 1;
engine->busy.last_conn = conn;
}
#endif
int
lsquic_engine_earliest_adv_tick (lsquic_engine_t *engine, int *diff)
{
const struct attq_elem *next_attq;
lsquic_time_t now, next_time;
#if LSQUIC_DEBUG_NEXT_ADV_TICK
const struct lsquic_conn *conn;
#if LSQUIC_DEBUG_NEXT_ADV_TICK || LSQUIC_CONN_STATS
struct lsquic_conn *conn;
const enum lsq_log_level L = LSQ_LOG_DEBUG; /* Easy toggle */
#endif
@ -2996,6 +3133,10 @@ lsquic_engine_earliest_adv_tick (lsquic_engine_t *engine, int *diff)
lsquic_mh_count(&engine->conns_out),
lsquic_mh_count(&engine->conns_out) != 1, "s",
CID_BITS(lsquic_conn_log_cid(conn)));
#endif
#if LSQUIC_CONN_STATS
conn = lsquic_mh_peek(&engine->conns_out);
update_busy_detector(engine, conn, 1);
#endif
*diff = 0;
return 1;
@ -3021,6 +3162,10 @@ lsquic_engine_earliest_adv_tick (lsquic_engine_t *engine, int *diff)
lsquic_mh_count(&engine->conns_tickable),
lsquic_mh_count(&engine->conns_tickable) != 1, "s",
CID_BITS(lsquic_conn_log_cid(conn)));
#endif
#if LSQUIC_CONN_STATS
conn = lsquic_mh_peek(&engine->conns_tickable);
update_busy_detector(engine, conn, 1);
#endif
*diff = 0;
return 1;
@ -3075,6 +3220,25 @@ lsquic_engine_earliest_adv_tick (lsquic_engine_t *engine, int *diff)
else
LSQ_LOG(L, "next advisory tick is %d usec away: resume sending", *diff);
#endif
#if LSQUIC_CONN_STATS
if (next_attq)
update_busy_detector(engine, next_attq->ae_conn,
/* Immediate if: a) time is now or in the past */
*diff <= 0
/* b) next event is pacer, which means that the
* connection wants to send, but is prevented
* by the pacer from doing so.
*/
|| next_attq->ae_why == AEW_PACER
/* c) next event is to retransmit data (meaning
* that there is data in flight) and the
* time is small, which implies a small RTT.
*/
|| (next_attq->ae_why == N_AEWS + AL_RETX_APP
&& *diff < 5000));
#endif
return 1;
}


@ -83,6 +83,13 @@ struct lsquic_engine_public {
*/
unsigned char enp_quic_ctx_buf[2][200];
unsigned enp_quic_ctx_sz[2];
#if LSQUIC_CONN_STATS
struct batch_size_stats {
unsigned min, max, /* Minimum and maximum batch sizes */
count; /* Number of batches sent */
float avg; /* Average batch size */
} enp_batch_size_stats;
#endif
};
/* Put connection onto the Tickable Queue if it is not already on it. If


@ -203,6 +203,7 @@ struct full_conn
} fc_hsk_ctx;
#if LSQUIC_CONN_STATS
struct conn_stats fc_stats;
struct conn_stats *fc_last_stats;
#endif
#if KEEP_CLOSED_STREAM_HISTORY
/* Rolling log of histories of closed streams. Older entries are
@ -1117,6 +1118,7 @@ full_conn_ci_destroy (lsquic_conn_t *lconn)
LSQ_NOTICE("ACKs: in: %lu; processed: %lu; merged: %lu",
conn->fc_stats.in.n_acks, conn->fc_stats.in.n_acks_proc,
conn->fc_stats.in.n_acks_merged);
free(conn->fc_last_stats);
#endif
while ((sitr = STAILQ_FIRST(&conn->fc_stream_ids_to_reset)))
{
@ -4439,6 +4441,44 @@ full_conn_ci_get_stats (struct lsquic_conn *lconn)
struct full_conn *conn = (struct full_conn *) lconn;
return &conn->fc_stats;
}
#include "lsquic_cong_ctl.h"
static void
full_conn_ci_log_stats (struct lsquic_conn *lconn)
{
struct full_conn *conn = (struct full_conn *) lconn;
struct batch_size_stats *const bs = &conn->fc_enpub->enp_batch_size_stats;
struct conn_stats diff_stats;
uint64_t cwnd;
char cidstr[MAX_CID_LEN * 2 + 1];
if (!conn->fc_last_stats)
{
conn->fc_last_stats = calloc(1, sizeof(*conn->fc_last_stats));
if (!conn->fc_last_stats)
return;
LSQ_DEBUG("allocated last stats");
}
cwnd = conn->fc_send_ctl.sc_ci->cci_get_cwnd(
conn->fc_send_ctl.sc_cong_ctl);
lsquic_conn_stats_diff(&conn->fc_stats, conn->fc_last_stats, &diff_stats);
lsquic_logger_log1(LSQ_LOG_NOTICE, LSQLM_CONN_STATS,
"%s: ticks: %lu; cwnd: %"PRIu64"; conn flow: max: %"PRIu64
", avail: %"PRIu64"; packets: sent: %lu, lost: %lu, retx: %lu, rcvd: %lu"
"; batch: count: %u; min: %u; max: %u; avg: %.2f",
(lsquic_cid2str(LSQUIC_LOG_CONN_ID, cidstr), cidstr),
diff_stats.n_ticks, cwnd,
conn->fc_pub.conn_cap.cc_max,
lsquic_conn_cap_avail(&conn->fc_pub.conn_cap),
diff_stats.out.packets, diff_stats.out.lost_packets,
diff_stats.out.retx_packets, diff_stats.in.packets,
bs->count, bs->min, bs->max, bs->avg);
*conn->fc_last_stats = conn->fc_stats;
memset(bs, 0, sizeof(*bs));
}
#endif
@ -4470,6 +4510,7 @@ static const struct conn_iface full_conn_iface = {
.ci_get_path = full_conn_ci_get_path,
#if LSQUIC_CONN_STATS
.ci_get_stats = full_conn_ci_get_stats,
.ci_log_stats = full_conn_ci_log_stats,
#endif
.ci_going_away = full_conn_ci_going_away,
.ci_hsk_done = full_conn_ci_hsk_done,


@ -76,6 +76,7 @@
#include "lsquic_headers.h"
#include "lsquic_crand.h"
#include "ls-sfparser.h"
#include "lsquic_qpack_exp.h"
#define LSQUIC_LOGGER_MODULE LSQLM_CONN
#define LSQUIC_LOG_CONN_ID ietf_full_conn_ci_get_log_cid(&conn->ifc_conn)
@ -512,6 +513,10 @@ struct ietf_full_conn
unsigned short ifc_min_dg_sz,
ifc_max_dg_sz;
struct inc_ack_stats ifc_ias;
#if LSQUIC_CONN_STATS
struct conn_stats ifc_stats,
*ifc_last_stats;
#endif
struct ack_info ifc_ack;
};
@ -525,6 +530,14 @@ struct ietf_full_conn
#define NPATH2CPATH(npath_) ((struct conn_path *) \
((char *) (npath_) - offsetof(struct conn_path, cop_path)))
#if LSQUIC_CONN_STATS
#define CONN_STATS(what_, count_) do { \
conn->ifc_stats.what_ += (count_); \
} while (0)
#else
#define CONN_STATS(what_, count_)
#endif
static const struct ver_neg server_ver_neg;
static const struct conn_iface *ietf_full_conn_iface_ptr;
@ -1221,6 +1234,9 @@ ietf_full_conn_init (struct ietf_full_conn *conn,
conn->ifc_pub.send_ctl = &conn->ifc_send_ctl;
conn->ifc_pub.enpub = enpub;
conn->ifc_pub.mm = &enpub->enp_mm;
#if LSQUIC_CONN_STATS
conn->ifc_pub.conn_stats = &conn->ifc_stats;
#endif
conn->ifc_pub.path = CUR_NPATH(conn);
TAILQ_INIT(&conn->ifc_pub.sending_streams);
TAILQ_INIT(&conn->ifc_pub.read_streams);
@ -1847,6 +1863,7 @@ generate_ack_frame_for_pns (struct ietf_full_conn *conn,
ABORT_ERROR("generating ACK frame failed: %d", errno);
return -1;
}
CONN_STATS(out.acks, 1);
char buf[0x100];
lsquic_hexstr(packet_out->po_data + packet_out->po_data_sz, w, buf, sizeof(buf));
LSQ_DEBUG("ACK bytes: %s", buf);
@ -3076,6 +3093,24 @@ ietf_full_conn_ci_destroy (struct lsquic_conn *lconn)
lsquic_hash_destroy(conn->ifc_bpus);
}
lsquic_hash_destroy(conn->ifc_pub.all_streams);
#if LSQUIC_CONN_STATS
if (conn->ifc_flags & IFC_CREATED_OK)
{
LSQ_NOTICE("# ticks: %lu", conn->ifc_stats.n_ticks);
LSQ_NOTICE("received %lu packets, of which %lu were not decryptable, %lu were "
"dups and %lu were errors; sent %lu packets, avg stream data per outgoing"
" packet is %lu bytes",
conn->ifc_stats.in.packets, conn->ifc_stats.in.undec_packets,
conn->ifc_stats.in.dup_packets, conn->ifc_stats.in.err_packets,
conn->ifc_stats.out.packets,
conn->ifc_stats.out.stream_data_sz / conn->ifc_stats.out.packets);
LSQ_NOTICE("ACKs: in: %lu; processed: %lu; merged: %lu",
conn->ifc_stats.in.n_acks, conn->ifc_stats.in.n_acks_proc,
conn->ifc_stats.in.n_acks_merged);
}
if (conn->ifc_last_stats)
free(conn->ifc_last_stats);
#endif
EV_LOG_CONN_EVENT(LSQUIC_LOG_CONN_ID, "full connection destroyed");
free(conn->ifc_errmsg);
free(conn);
@ -3503,11 +3538,52 @@ apply_trans_params (struct ietf_full_conn *conn,
}
static void
randomize_qpack_settings (struct ietf_full_conn *conn, const char *side,
unsigned *dyn_table_size, unsigned *max_risked_streams)
{
const unsigned char nybble = lsquic_crand_get_nybble(
conn->ifc_enpub->enp_crand);
/* For each setting, select one of four levels:
* Table size: 0, 1/4, 1/2, and 1/1 of dyn_table_size
* Risked streams: 0, 1, 5, and max_risked_streams
*/
switch (nybble & 3)
{ case 0: *dyn_table_size = 0; break;
case 1: *dyn_table_size /= 4; break;
case 2: *dyn_table_size /= 2; break;
default: break;
}
if (*dyn_table_size)
switch ((nybble >> 2) & 3)
{ case 0: *max_risked_streams = 0; break;
case 1: *max_risked_streams = MIN(1, *max_risked_streams); break;
case 2: *max_risked_streams = MIN(5, *max_risked_streams); break;
default: break;
}
else
*max_risked_streams = 0;
LSQ_INFO("randomized QPACK %s settings: table size: %u; risked "
"streams: %u", side, *dyn_table_size, *max_risked_streams);
}
static int
init_http (struct ietf_full_conn *conn)
{
unsigned max_risked_streams, dyn_table_size;
fiu_return_on("full_conn_ietf/init_http", -1);
lsquic_qeh_init(&conn->ifc_qeh, &conn->ifc_conn);
if (conn->ifc_settings->es_qpack_experiment)
{
conn->ifc_qeh.qeh_exp_rec = lsquic_qpack_exp_new();
if (conn->ifc_qeh.qeh_exp_rec)
{
conn->ifc_qeh.qeh_exp_rec->qer_flags |= QER_SERVER & conn->ifc_flags;
conn->ifc_qeh.qeh_exp_rec->qer_flags |= QER_ENCODER;
}
}
if (0 == avail_streams_count(conn, conn->ifc_flags & IFC_SERVER,
SD_UNI))
{
@ -3521,8 +3597,14 @@ init_http (struct ietf_full_conn *conn)
ABORT_WARN("cannot create outgoing control stream");
return -1;
}
dyn_table_size = conn->ifc_settings->es_qpack_dec_max_size;
max_risked_streams = conn->ifc_settings->es_qpack_dec_max_blocked;
if (conn->ifc_settings->es_qpack_experiment == 2)
randomize_qpack_settings(conn, "decoder", &dyn_table_size,
&max_risked_streams);
if (0 != lsquic_hcso_write_settings(&conn->ifc_hcso,
&conn->ifc_enpub->enp_settings, conn->ifc_flags & IFC_SERVER))
conn->ifc_settings->es_max_header_list_size, dyn_table_size,
max_risked_streams, conn->ifc_flags & IFC_SERVER))
{
ABORT_WARN("cannot write SETTINGS");
return -1;
@ -3537,8 +3619,7 @@ init_http (struct ietf_full_conn *conn)
}
if (0 != lsquic_qdh_init(&conn->ifc_qdh, &conn->ifc_conn,
conn->ifc_flags & IFC_SERVER, conn->ifc_enpub,
conn->ifc_settings->es_qpack_dec_max_size,
conn->ifc_settings->es_qpack_dec_max_blocked))
dyn_table_size, max_risked_streams))
{
ABORT_WARN("cannot initialize QPACK decoder");
return -1;
@ -4715,6 +4796,7 @@ process_ack (struct ietf_full_conn *conn, struct ack_info *acki,
unsigned n_unacked;
int one_rtt_acked;
CONN_STATS(in.n_acks_proc, 1);
LSQ_DEBUG("Processing ACK");
one_rtt_acked = lsquic_send_ctl_1rtt_acked(&conn->ifc_send_ctl);
n_unacked = lsquic_send_ctl_n_unacked(&conn->ifc_send_ctl);
@ -5413,6 +5495,8 @@ process_stream_frame (struct ietf_full_conn *conn,
}
EV_LOG_STREAM_FRAME_IN(LSQUIC_LOG_CONN_ID, stream_frame);
LSQ_DEBUG("Got stream frame for stream #%"PRIu64, stream_frame->stream_id);
CONN_STATS(in.stream_frames, 1);
CONN_STATS(in.stream_data_sz, stream_frame->data_frame.df_size);
if (conn_is_send_only_stream(conn, stream_frame->stream_id))
{
@ -5515,6 +5599,8 @@ process_ack_frame (struct ietf_full_conn *conn,
int parsed_len;
lsquic_time_t warn_time;
CONN_STATS(in.n_acks, 1);
if (conn->ifc_flags & IFC_HAVE_SAVED_ACK)
new_acki = conn->ifc_pub.mm->acki;
else
@ -5561,6 +5647,7 @@ process_ack_frame (struct ietf_full_conn *conn,
{
if (0 == lsquic_merge_acks(&conn->ifc_ack, new_acki))
{
CONN_STATS(in.n_acks_merged, 1);
++conn->ifc_ias.n_acks;
LSQ_DEBUG("merged into saved ACK, getting %s",
(lsquic_acki2str(&conn->ifc_ack, conn->ifc_pub.mm->ack_str,
@ -6888,6 +6975,8 @@ process_regular_packet (struct ietf_full_conn *conn,
if (HETY_RETRY == packet_in->pi_header_type)
return process_retry_packet(conn, packet_in);
CONN_STATS(in.packets, 1);
pns = lsquic_hety2pns[ packet_in->pi_header_type ];
if (pns == PNS_INIT)
conn->ifc_conn.cn_esf.i->esfi_set_iscid(conn->ifc_conn.cn_enc_session,
@ -6957,12 +7046,14 @@ process_regular_packet (struct ietf_full_conn *conn,
}
else if (dec_packin == DECPI_BADCRYPT)
{
CONN_STATS(in.undec_packets, 1);
LSQ_INFO("could not decrypt packet (type %s)",
lsquic_hety2str[packet_in->pi_header_type]);
return 0;
}
else
{
CONN_STATS(in.undec_packets, 1);
LSQ_INFO("packet is too short to be decrypted");
return 0;
}
@ -7095,12 +7186,14 @@ process_regular_packet (struct ietf_full_conn *conn,
}
return 0;
case REC_ST_DUP:
CONN_STATS(in.dup_packets, 1);
LSQ_INFO("packet %"PRIu64" is a duplicate", packet_in->pi_packno);
return 0;
default:
assert(0);
/* Fall through */
case REC_ST_ERR:
CONN_STATS(in.err_packets, 1);
LSQ_INFO("error processing packet %"PRIu64, packet_in->pi_packno);
return -1;
}
@ -7247,6 +7340,7 @@ ietf_full_conn_ci_packet_in (struct lsquic_conn *lconn,
{
struct ietf_full_conn *conn = (struct ietf_full_conn *) lconn;
CONN_STATS(in.bytes, packet_in->pi_data_sz);
set_earliest_idle_alarm(conn, conn->ifc_idle_to
? packet_in->pi_received + conn->ifc_idle_to : 0);
if (0 == (conn->ifc_flags & IFC_IMMEDIATE_CLOSE_FLAGS))
@ -7353,6 +7447,8 @@ ietf_full_conn_ci_packet_sent (struct lsquic_conn *lconn,
conn->ifc_enpub->enp_crand))) * 1000000);
conn->ifc_pub.bytes_out += lsquic_packet_out_sent_sz(&conn->ifc_conn,
packet_out);
CONN_STATS(out.packets, 1);
CONN_STATS(out.bytes, lsquic_packet_out_sent_sz(lconn, packet_out));
}
@ -7773,6 +7869,8 @@ ietf_full_conn_ci_tick (struct lsquic_conn *lconn, lsquic_time_t now)
} \
} while (0)
CONN_STATS(n_ticks, 1);
if (conn->ifc_flags & IFC_HAVE_SAVED_ACK)
{
(void) /* If there is an error, we'll fail shortly */
@ -8328,6 +8426,57 @@ ietf_full_conn_ci_count_garbage (struct lsquic_conn *lconn, size_t garbage_sz)
}
#if LSQUIC_CONN_STATS
static const struct conn_stats *
ietf_full_conn_ci_get_stats (struct lsquic_conn *lconn)
{
struct ietf_full_conn *conn = (struct ietf_full_conn *) lconn;
return &conn->ifc_stats;
}
#include "lsquic_cong_ctl.h"
static void
ietf_full_conn_ci_log_stats (struct lsquic_conn *lconn)
{
struct ietf_full_conn *conn = (struct ietf_full_conn *) lconn;
struct batch_size_stats *const bs = &conn->ifc_enpub->enp_batch_size_stats;
struct conn_stats diff_stats;
uint64_t cwnd;
char cidstr[MAX_CID_LEN * 2 + 1];
if (!conn->ifc_last_stats)
{
conn->ifc_last_stats = calloc(1, sizeof(*conn->ifc_last_stats));
if (!conn->ifc_last_stats)
return;
LSQ_DEBUG("allocated last stats");
}
cwnd = conn->ifc_send_ctl.sc_ci->cci_get_cwnd(
conn->ifc_send_ctl.sc_cong_ctl);
lsquic_conn_stats_diff(&conn->ifc_stats, conn->ifc_last_stats, &diff_stats);
lsquic_logger_log1(LSQ_LOG_NOTICE, LSQLM_CONN_STATS,
"%s: ticks: %lu; cwnd: %"PRIu64"; conn flow: max: %"PRIu64
", avail: %"PRIu64"; packets: sent: %lu, lost: %lu, retx: %lu, rcvd: %lu"
"; batch: count: %u; min: %u; max: %u; avg: %.2f",
(lsquic_cid2str(LSQUIC_LOG_CONN_ID, cidstr), cidstr),
diff_stats.n_ticks, cwnd,
conn->ifc_pub.conn_cap.cc_max,
lsquic_conn_cap_avail(&conn->ifc_pub.conn_cap),
diff_stats.out.packets, diff_stats.out.lost_packets,
diff_stats.out.retx_packets, diff_stats.in.packets,
bs->count, bs->min, bs->max, bs->avg);
*conn->ifc_last_stats = conn->ifc_stats;
memset(bs, 0, sizeof(*bs));
}
#endif
#define IETF_FULL_CONN_FUNCS \
.ci_abort = ietf_full_conn_ci_abort, \
.ci_abort_error = ietf_full_conn_ci_abort_error, \
@ -8376,6 +8525,10 @@ static const struct conn_iface ietf_full_conn_iface = {
.ci_packet_not_sent = ietf_full_conn_ci_packet_not_sent,
.ci_packet_sent = ietf_full_conn_ci_packet_sent,
.ci_packet_too_large = ietf_full_conn_ci_packet_too_large,
#if LSQUIC_CONN_STATS
.ci_get_stats = ietf_full_conn_ci_get_stats,
.ci_log_stats = ietf_full_conn_ci_log_stats,
#endif
};
static const struct conn_iface *ietf_full_conn_iface_ptr =
&ietf_full_conn_iface;
@ -8385,6 +8538,10 @@ static const struct conn_iface ietf_full_conn_prehsk_iface = {
.ci_next_packet_to_send = ietf_full_conn_ci_next_packet_to_send_pre_hsk,
.ci_packet_not_sent = ietf_full_conn_ci_packet_not_sent_pre_hsk,
.ci_packet_sent = ietf_full_conn_ci_packet_sent_pre_hsk,
#if LSQUIC_CONN_STATS
.ci_get_stats = ietf_full_conn_ci_get_stats,
.ci_log_stats = ietf_full_conn_ci_log_stats,
#endif
};
static const struct conn_iface *ietf_full_conn_prehsk_iface_ptr =
&ietf_full_conn_prehsk_iface;
@ -8512,6 +8669,18 @@ on_settings_frame (void *ctx)
conn->ifc_peer_hq_settings.header_table_size);
max_risked_streams = MIN(conn->ifc_settings->es_qpack_enc_max_blocked,
conn->ifc_peer_hq_settings.qpack_blocked_streams);
if (conn->ifc_settings->es_qpack_experiment == 2)
randomize_qpack_settings(conn, "encoder", &dyn_table_size,
&max_risked_streams);
if (conn->ifc_qeh.qeh_exp_rec)
{
conn->ifc_qeh.qeh_exp_rec->qer_peer_max_size
= conn->ifc_peer_hq_settings.header_table_size;
conn->ifc_qeh.qeh_exp_rec->qer_used_max_size = dyn_table_size;
conn->ifc_qeh.qeh_exp_rec->qer_peer_max_blocked
= conn->ifc_peer_hq_settings.qpack_blocked_streams;
conn->ifc_qeh.qeh_exp_rec->qer_used_max_blocked = max_risked_streams;
}
if (0 != lsquic_qeh_settings(&conn->ifc_qeh,
conn->ifc_peer_hq_settings.header_table_size,
dyn_table_size, max_risked_streams, conn->ifc_flags & IFC_SERVER))


@ -95,15 +95,26 @@ lsquic_hcsi_reader_feed (struct hcsi_reader *reader, const void *buf,
reader->hr_frame_type);
return -1;
default:
if (!(reader->hr_frame_type >= 0xB &&
(reader->hr_frame_type - 0xB) % 0x1F == 0))
LSQ_INFO("unknown frame type 0x%"PRIX64" -- skipping",
reader->hr_frame_type);
{
/* From [draft-ietf-quic-http-31] Section 7.2.8:
" Frame types of the format "0x1f * N + 0x21" for non-negative
" integer values of N are reserved to exercise the requirement
" that unknown types be ignored
*/
enum lsq_log_level L;
if (!(reader->hr_frame_type >= 0x21 &&
(reader->hr_frame_type - 0x21) % 0x1F == 0))
/* Non-grease: log with higher level: */
L = LSQ_LOG_INFO;
else
L = LSQ_LOG_DEBUG;
LSQ_LOG(L, "unknown frame type 0x%"PRIX64": will skip "
"%"PRIu64" bytes", reader->hr_frame_type,
reader->hr_frame_length);
reader->hr_state = HR_SKIPPING;
LSQ_DEBUG("unknown frame 0x%"PRIX64": will skip %"PRIu64" bytes",
reader->hr_frame_type, reader->hr_frame_length);
break;
}
}
break;
case HR_READ_VARINT:
reader->hr_u.vint_state.pos = 0;


@ -119,7 +119,8 @@ hcso_setting_type2bits (struct hcso_writer *writer, unsigned setting)
int
lsquic_hcso_write_settings (struct hcso_writer *writer,
const struct lsquic_engine_settings *settings,
unsigned max_header_list_size,
unsigned dyn_table_size, unsigned max_risked_streams,
int is_server)
{
unsigned char *p;
@ -149,36 +150,36 @@ lsquic_hcso_write_settings (struct hcso_writer *writer,
*p++ = HQFT_SETTINGS;
p += frame_size_len;
if (settings->es_max_header_list_size != HQ_DF_MAX_HEADER_LIST_SIZE)
if (max_header_list_size != HQ_DF_MAX_HEADER_LIST_SIZE)
{
/* Write out SETTINGS_MAX_HEADER_LIST_SIZE */
bits = hcso_setting_type2bits(writer, HQSID_MAX_HEADER_LIST_SIZE);
vint_write(p, HQSID_MAX_HEADER_LIST_SIZE, bits, 1 << bits);
p += 1 << bits;
bits = vint_val2bits(settings->es_max_header_list_size);
vint_write(p, settings->es_max_header_list_size, bits, 1 << bits);
bits = vint_val2bits(max_header_list_size);
vint_write(p, max_header_list_size, bits, 1 << bits);
p += 1 << bits;
}
if (settings->es_qpack_dec_max_size != HQ_DF_QPACK_MAX_TABLE_CAPACITY)
if (dyn_table_size != HQ_DF_QPACK_MAX_TABLE_CAPACITY)
{
/* Write out SETTINGS_QPACK_MAX_TABLE_CAPACITY */
bits = hcso_setting_type2bits(writer, HQSID_QPACK_MAX_TABLE_CAPACITY);
vint_write(p, HQSID_QPACK_MAX_TABLE_CAPACITY, bits, 1 << bits);
p += 1 << bits;
bits = vint_val2bits(settings->es_qpack_dec_max_size);
vint_write(p, settings->es_qpack_dec_max_size, bits, 1 << bits);
bits = vint_val2bits(dyn_table_size);
vint_write(p, dyn_table_size, bits, 1 << bits);
p += 1 << bits;
}
if (settings->es_qpack_dec_max_blocked != HQ_DF_QPACK_BLOCKED_STREAMS)
if (max_risked_streams != HQ_DF_QPACK_BLOCKED_STREAMS)
{
/* Write out SETTINGS_QPACK_BLOCKED_STREAMS */
bits = hcso_setting_type2bits(writer, HQSID_QPACK_BLOCKED_STREAMS);
vint_write(p, HQSID_QPACK_BLOCKED_STREAMS, bits, 1 << bits);
p += 1 << bits;
bits = vint_val2bits(settings->es_qpack_dec_max_blocked);
vint_write(p, settings->es_qpack_dec_max_blocked, bits, 1 << bits);
bits = vint_val2bits(max_risked_streams);
vint_write(p, max_risked_streams, bits, 1 << bits);
p += 1 << bits;
}

View file

@ -6,7 +6,6 @@
#ifndef LSQUIC_HCSO_WRITER_H
#define LSQUIC_HCSO_WRITER_H 1
struct lsquic_engine_settings;
struct lsquic_ext_http_prio;
struct lsquic_stream;
@ -24,7 +23,7 @@ struct hcso_writer
int
lsquic_hcso_write_settings (struct hcso_writer *,
const struct lsquic_engine_settings *, int);
unsigned, unsigned, unsigned, int);
int
lsquic_hcso_write_goaway (struct hcso_writer *, lsquic_stream_id_t);

View file

@ -96,6 +96,7 @@ enum lsq_log_level lsq_log_levels[N_LSQUIC_LOGGER_MODULES] = {
[LSQLM_PRIO] = LSQ_LOG_WARN,
[LSQLM_BW_SAMPLER] = LSQ_LOG_WARN,
[LSQLM_PACKET_RESIZE] = LSQ_LOG_WARN,
[LSQLM_CONN_STATS] = LSQ_LOG_WARN,
};
const char *const lsqlm_to_str[N_LSQUIC_LOGGER_MODULES] = {
@ -140,6 +141,7 @@ const char *const lsqlm_to_str[N_LSQUIC_LOGGER_MODULES] = {
[LSQLM_PRIO] = "prio",
[LSQLM_BW_SAMPLER] = "bw-sampler",
[LSQLM_PACKET_RESIZE] = "packet-resize",
[LSQLM_CONN_STATS] = "conn-stats",
};
const char *const lsq_loglevel2str[N_LSQUIC_LOG_LEVELS] = {

View file

@ -87,6 +87,7 @@ enum lsquic_logger_module {
LSQLM_PRIO,
LSQLM_BW_SAMPLER,
LSQLM_PACKET_RESIZE,
LSQLM_CONN_STATS,
N_LSQUIC_LOGGER_MODULES
};

View file

@ -32,6 +32,8 @@
#include "lsquic_conn_public.h"
#include "lsquic_hq.h"
#include "lsquic_parse.h"
#include "lsquic_qpack_exp.h"
#include "lsquic_util.h"
#define LSQUIC_LOGGER_MODULE LSQLM_QDEC_HDL
#define LSQUIC_LOG_CONN_ID lsquic_conn_log_cid(qdh->qdh_conn)
@ -163,6 +165,18 @@ lsquic_qdh_init (struct qpack_dec_hdl *qdh, struct lsquic_conn *conn,
if (enpub->enp_hsi_if->hsi_flags & LSQUIC_HSI_HASH_NAMEVAL)
dec_opts |= LSQPACK_DEC_OPT_HASH_NAMEVAL;
if (enpub->enp_settings.es_qpack_experiment)
{
qdh->qdh_exp_rec = lsquic_qpack_exp_new();
if (qdh->qdh_exp_rec)
{
if (conn->cn_flags & LSCONN_SERVER)
qdh->qdh_exp_rec->qer_flags |= QER_SERVER;
qdh->qdh_exp_rec->qer_used_max_size = dyn_table_size;
qdh->qdh_exp_rec->qer_used_max_blocked = max_risked_streams;
}
}
qdh->qdh_conn = conn;
lsquic_frab_list_init(&qdh->qdh_fral, 0x400, NULL, NULL, NULL);
lsqpack_dec_init(&qdh->qdh_decoder, (void *) conn, dyn_table_size,
@ -189,12 +203,29 @@ lsquic_qdh_init (struct qpack_dec_hdl *qdh, struct lsquic_conn *conn,
}
static void
qdh_log_and_clean_exp_rec (struct qpack_dec_hdl *qdh)
{
char buf[0x400];
qdh->qdh_exp_rec->qer_comp_ratio = lsqpack_dec_ratio(&qdh->qdh_decoder);
/* Naughty: poking inside the decoder, it's not exposed. (Should it be?) */
qdh->qdh_exp_rec->qer_peer_max_size = qdh->qdh_decoder.qpd_cur_max_capacity;
(void) lsquic_qpack_exp_to_xml(qdh->qdh_exp_rec, buf, sizeof(buf));
LSQ_NOTICE("%s", buf);
lsquic_qpack_exp_destroy(qdh->qdh_exp_rec);
qdh->qdh_exp_rec = NULL;
}
void
lsquic_qdh_cleanup (struct qpack_dec_hdl *qdh)
{
if (qdh->qdh_flags & QDH_INITIALIZED)
{
LSQ_DEBUG("cleanup");
if (qdh->qdh_exp_rec)
qdh_log_and_clean_exp_rec(qdh);
lsqpack_dec_cleanup(&qdh->qdh_decoder);
lsquic_frab_list_cleanup(&qdh->qdh_fral);
qdh->qdh_flags &= ~QDH_INITIALIZED;
@ -498,6 +529,22 @@ qdh_prepare_decode (void *stream_p, struct lsxpack_header *xhdr, size_t space)
}
static void
qdh_maybe_set_user_agent (struct qpack_dec_hdl *qdh,
const struct lsxpack_header *xhdr)
{
/* Flipped: we are the *decoder* */
const char *const name = qdh->qdh_exp_rec->qer_flags & QER_SERVER ?
"user-agent" : "server";
const size_t len = qdh->qdh_exp_rec->qer_flags & QER_SERVER ? 10 : 6;
if (len == xhdr->name_len
&& 0 == memcmp(name, lsxpack_header_get_name(xhdr), len))
qdh->qdh_exp_rec->qer_user_agent
= strndup(lsxpack_header_get_value(xhdr), xhdr->val_len);
}
static int
qdh_process_header (void *stream_p, struct lsxpack_header *xhdr)
{
@ -524,6 +571,8 @@ qdh_process_header (void *stream_p, struct lsxpack_header *xhdr)
(char *) stream->conn_pub->mm->acki,
sizeof(*stream->conn_pub->mm->acki));
}
else if (qdh->qdh_exp_rec && !qdh->qdh_exp_rec->qer_user_agent)
qdh_maybe_set_user_agent(qdh, xhdr);
return qdh->qdh_enpub->enp_hsi_if->hsi_process_header(u->ctx.hset, xhdr);
}
@ -662,9 +711,21 @@ lsquic_qdh_header_in_begin (struct qpack_dec_hdl *qdh,
};
stream->sm_hblock_ctx = u;
if (qdh->qdh_exp_rec)
{
const lsquic_time_t now = lsquic_time_now();
if (0 == qdh->qdh_exp_rec->qer_hblock_count)
qdh->qdh_exp_rec->qer_first_req = now;
qdh->qdh_exp_rec->qer_last_req = now;
++qdh->qdh_exp_rec->qer_hblock_count;
qdh->qdh_exp_rec->qer_hblock_size += bufsz;
}
dec_buf_sz = sizeof(dec_buf);
rhs = lsqpack_dec_header_in(&qdh->qdh_decoder, stream, stream->id,
header_size, buf, bufsz, dec_buf, &dec_buf_sz);
if (qdh->qdh_exp_rec)
qdh->qdh_exp_rec->qer_peer_max_blocked += rhs == LQRHS_BLOCKED;
return qdh_header_read_results(qdh, stream, rhs, dec_buf, dec_buf_sz);
}
@ -681,9 +742,13 @@ lsquic_qdh_header_in_continue (struct qpack_dec_hdl *qdh,
if (qdh->qdh_flags & QDH_INITIALIZED)
{
if (qdh->qdh_exp_rec)
qdh->qdh_exp_rec->qer_hblock_size += bufsz;
dec_buf_sz = sizeof(dec_buf);
rhs = lsqpack_dec_header_read(&qdh->qdh_decoder, stream,
buf, bufsz, dec_buf, &dec_buf_sz);
if (qdh->qdh_exp_rec)
qdh->qdh_exp_rec->qer_peer_max_blocked += rhs == LQRHS_BLOCKED;
return qdh_header_read_results(qdh, stream, rhs, dec_buf, dec_buf_sz);
}
else

View file

@ -14,6 +14,7 @@ struct lsquic_conn;
struct lsquic_stream;
struct lsquic_stream_if;
struct lsquic_engine_public;
struct qpack_exp_record;
struct qpack_dec_hdl
@ -33,6 +34,7 @@ struct qpack_dec_hdl
void *qdh_hsi_ctx;
void (*qdh_on_dec_sent_func)(void *);
void *qdh_on_dec_sent_ctx;
struct qpack_exp_record *qdh_exp_rec;
};
int

View file

@ -26,6 +26,8 @@
#include "lsqpack.h"
#include "lsxpack_header.h"
#include "lsquic_conn.h"
#include "lsquic_qpack_exp.h"
#include "lsquic_util.h"
#include "lsquic_qenc_hdl.h"
#define LSQUIC_LOGGER_MODULE LSQLM_QENC_HDL
@ -140,12 +142,27 @@ lsquic_qeh_settings (struct qpack_enc_hdl *qeh, unsigned max_table_size,
}
static void
qeh_log_and_clean_exp_rec (struct qpack_enc_hdl *qeh)
{
char buf[0x400];
qeh->qeh_exp_rec->qer_comp_ratio = lsqpack_enc_ratio(&qeh->qeh_encoder);
(void) lsquic_qpack_exp_to_xml(qeh->qeh_exp_rec, buf, sizeof(buf));
LSQ_NOTICE("%s", buf);
lsquic_qpack_exp_destroy(qeh->qeh_exp_rec);
qeh->qeh_exp_rec = NULL;
}
void
lsquic_qeh_cleanup (struct qpack_enc_hdl *qeh)
{
if (qeh->qeh_flags & QEH_INITIALIZED)
{
LSQ_DEBUG("cleanup");
if (qeh->qeh_exp_rec)
qeh_log_and_clean_exp_rec(qeh);
lsqpack_enc_cleanup(&qeh->qeh_encoder);
lsquic_frab_list_cleanup(&qeh->qeh_fral);
memset(qeh, 0, sizeof(*qeh));
@ -323,6 +340,28 @@ const struct lsquic_stream_if *const lsquic_qeh_dec_sm_in_if =
&qeh_dec_sm_in_if;
static void
qeh_maybe_set_user_agent (struct qpack_enc_hdl *qeh,
const struct lsquic_http_headers *headers)
{
const char *const name = qeh->qeh_exp_rec->qer_flags & QER_SERVER ?
"server" : "user-agent";
const size_t len = qeh->qeh_exp_rec->qer_flags & QER_SERVER ? 6 : 10;
int i;
for (i = 0; i < headers->count; ++i)
if (len == headers->headers[i].name_len
&& 0 == memcmp(name,
lsxpack_header_get_name(&headers->headers[i]), len))
{
qeh->qeh_exp_rec->qer_user_agent = strndup(
lsxpack_header_get_value(&headers->headers[i]),
headers->headers[i].val_len);
break;
}
}
static enum qwh_status
qeh_write_headers (struct qpack_enc_hdl *qeh, lsquic_stream_id_t stream_id,
unsigned seqno, const struct lsquic_http_headers *headers,
@ -347,6 +386,17 @@ qeh_write_headers (struct qpack_enc_hdl *qeh, lsquic_stream_id_t stream_id,
return QWH_ERR;
#endif
if (qeh->qeh_exp_rec)
{
const lsquic_time_t now = lsquic_time_now();
if (qeh->qeh_exp_rec->qer_hblock_count == 0)
qeh->qeh_exp_rec->qer_first_req = now;
qeh->qeh_exp_rec->qer_last_req = now;
++qeh->qeh_exp_rec->qer_hblock_count;
if (!qeh->qeh_exp_rec->qer_user_agent)
qeh_maybe_set_user_agent(qeh, headers);
}
s = lsqpack_enc_start_header(&qeh->qeh_encoder, stream_id, 0);
if (s != 0)
{
@ -443,6 +493,8 @@ qeh_write_headers (struct qpack_enc_hdl *qeh, lsquic_stream_id_t stream_id,
*prefix_sz = (size_t) nw;
}
*headers_sz = p - buf;
if (qeh->qeh_exp_rec)
qeh->qeh_exp_rec->qer_hblock_size += p - buf;
if (lsquic_frab_list_empty(&qeh->qeh_fral))
{
LSQ_DEBUG("all %zd bytes of encoder stream written out; header block "

View file

@ -13,6 +13,7 @@
struct lsquic_conn;
struct lsquic_stream;
struct lsquic_stream_if;
struct qpack_exp_record;
struct qpack_enc_hdl
{
@ -26,6 +27,7 @@ struct qpack_enc_hdl
struct lsquic_stream *qeh_enc_sm_out;
struct frab_list qeh_fral;
struct lsquic_stream *qeh_dec_sm_in;
struct qpack_exp_record *qeh_exp_rec;
size_t qeh_tsu_sz;
unsigned char qeh_tsu_buf[LSQPACK_LONGEST_SDTC];
};

View file

@ -0,0 +1,66 @@
/* Copyright (c) 2017 - 2020 LiteSpeed Technologies Inc. See LICENSE. */
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include "lsquic_int_types.h"
#include "lsquic_qpack_exp.h"
struct qpack_exp_record *
lsquic_qpack_exp_new (void)
{
return calloc(1, sizeof(struct qpack_exp_record));
}
void
lsquic_qpack_exp_destroy (struct qpack_exp_record *exp)
{
free(exp->qer_user_agent);
free(exp);
}
static const char *const flag2tag[] = {
[QER_SERVER|QER_ENCODER] = "server",
[QER_SERVER|0] = "user-agent",
[0 |QER_ENCODER] = "user-agent",
[0 |0] = "server",
};
int
lsquic_qpack_exp_to_xml (const struct qpack_exp_record *exp, char *buf,
size_t buf_sz)
{
const char *const tag = flag2tag[exp->qer_flags & (QER_SERVER|QER_ENCODER)];
return snprintf(buf, buf_sz,
"<qpack-exp>"
"<role>%s</role>"
"<duration units=\"ms\">%"PRIu64"</duration>"
"<hblock-count>%u</hblock-count>"
"<hblock-size>%u</hblock-size>"
"<peer-max-size>%u</peer-max-size>"
"<used-max-size>%u</used-max-size>"
"<peer-max-blocked>%u</peer-max-blocked>"
"<used-max-blocked>%u</used-max-blocked>"
"<comp-ratio>%.3f</comp-ratio>"
/* We just print unescaped string... */
"<%s>%s</%s>"
"</qpack-exp>"
, exp->qer_flags & QER_ENCODER ? "encoder" : "decoder"
, (exp->qer_last_req - exp->qer_first_req) / 1000 /* Milliseconds */
, exp->qer_hblock_count
, exp->qer_hblock_size
, exp->qer_peer_max_size
, exp->qer_used_max_size
, exp->qer_peer_max_blocked
, exp->qer_used_max_blocked
, exp->qer_comp_ratio
, tag
, exp->qer_user_agent ? exp->qer_user_agent : ""
, tag
);
}

View file

@ -0,0 +1,71 @@
/* Copyright (c) 2017 - 2020 LiteSpeed Technologies Inc. See LICENSE. */
/* QPACK Experiment record */
#ifndef LSQUIC_QPACK_EXP_H
#define LSQUIC_QPACK_EXP_H
struct qpack_exp_record
{
enum {
QER_SERVER = 1 << 0, /* Client or server */
QER_ENCODER = 1 << 1, /* If not set, this is decoder */
} qer_flags;
/* Timestamp of the first request */
lsquic_time_t qer_first_req;
/* Timestamp of the last request */
lsquic_time_t qer_last_req;
/* Number of header blocks passed through the encoder or the decoder */
unsigned qer_hblock_count;
/* Cumulative size of all header blocks processed */
unsigned qer_hblock_size;
/* For encoder, the "peer max size" is the maximum size advertised by
* the peer and the "used max size" is the maximum size that our
* encoder ends up using (the value selected by experiment).
*
 * For decoder, the "used max size" is the maximum size we advertise
 * (selected by experiment), while the "peer max size" is the size the
 * peer's encoder uses, as given by the value of the last TSU instruction.
*/
unsigned qer_peer_max_size;
unsigned qer_used_max_size;
/* For encoder, the "peer max blocked" is the maximum number of blocked
* streams advertised by the peer, while the "used max blocked" is the
* self-imposed limit (selected by experiment).
*
* For decoder, the "used max blocked" is the maximum number of blocked
* streams that we advertised (selected by experiment) and the "peer max
 * blocked" is the total number of times a header block was blocked. Note
 * that the latter does not account for the duration of blockage, which
 * may be insignificant. For example, a single packet may carry a header
 * block packaged before the required encoder stream update, in which case
 * the header block is blocked and then unblocked immediately.
*/
unsigned qer_peer_max_blocked;
unsigned qer_used_max_blocked;
/* The compression ratio is taken when experiment concludes via
* lsqpack_enc_ratio() or lsqpack_dec_ratio().
*/
float qer_comp_ratio;
/* Either 'Server:' or 'User-Agent:' */
char *qer_user_agent;
};
struct qpack_exp_record *
lsquic_qpack_exp_new (void);
void
lsquic_qpack_exp_destroy (struct qpack_exp_record *);
/* Returns same as snprintf(3) */
int
lsquic_qpack_exp_to_xml (const struct qpack_exp_record *, char *, size_t);
#endif

View file

@ -899,6 +899,9 @@ send_ctl_handle_regular_lost_packet (struct lsquic_send_ctl *ctl,
packet_sz = packet_out_sent_sz(packet_out);
++ctl->sc_loss_count;
#if LSQUIC_CONN_STATS
++ctl->sc_conn_pub->conn_stats->out.lost_packets;
#endif
if (packet_out->po_frame_types & (1 << QUIC_FRAME_ACK))
{

View file

@ -29,6 +29,7 @@
#include "lsquic_frame_reader.h"
#if LSQUIC_CONN_STATS
#include "lsquic_int_types.h"
#include "lsquic_hash.h"
#include "lsquic_conn.h"
#endif