Inverse of a finite difference
In the calculus of finite differences, the indefinite sum operator (also known as the antidifference operator), denoted by {\textstyle \sum _{x}} or {\displaystyle \Delta ^{-1}},[1][2] is the linear operator that is the inverse of the forward difference operator {\displaystyle \Delta }. It relates to the forward difference operator as the indefinite integral relates to the derivative. Thus,[3]
{\displaystyle \Delta \sum _{x}f(x)=f(x)\,.}
More explicitly, if {\textstyle \sum _{x}f(x)=F(x)}, then
{\displaystyle F(x+1)-F(x)=f(x)\,.}
The solution is not unique; it is determined only up to an additive periodic function with period 1. Therefore, each indefinite sum represents a family of functions.
Fundamental theorem of the calculus of finite differences
Indefinite sums can be used to calculate definite sums with the formula:[4]
{\displaystyle \sum _{k=a}^{b}f(k)=\Delta ^{-1}f(b+1)-\Delta ^{-1}f(a).}
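For example (a routine check, not from the cited sources), taking {\displaystyle f(k)=k} with the antidifference {\displaystyle \Delta ^{-1}f(x)={\tfrac {x(x-1)}{2}}} gives
{\displaystyle \sum _{k=a}^{b}k={\frac {(b+1)b}{2}}-{\frac {a(a-1)}{2}},}
the familiar closed form for a sum of consecutive integers.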
The inverse forward difference operator, {\displaystyle \Delta ^{-1}}, extends the summation up to {\displaystyle x-1}:
{\displaystyle \sum _{k=0}^{x-1}f(k).}
Some authors instead analytically extend the summation whose upper limit is the argument itself, without a shift:[5][6][7]
{\displaystyle \sum _{k=1}^{x}f(k).}
In this case, a closed-form expression {\displaystyle F(x)} for the sum is a solution of
{\displaystyle F(x+1)-F(x)=f(x+1),}
which is called the telescoping equation.[8] It is the inverse of the backward difference operator {\displaystyle \nabla }, written {\displaystyle \nabla ^{-1}}:
{\displaystyle F(x)-F(x-1)=f(x).}
It is related to the forward antidifference operator by the fundamental theorem of the calculus of finite differences.
The functional equation
{\displaystyle F(x+1)-F(x)=f(x)}
does not have a unique solution. If {\displaystyle F_{1}(x)} is a particular solution, then for any function {\displaystyle C(x)} satisfying {\displaystyle C(x+1)=C(x)} (i.e., any 1-periodic function), the function {\displaystyle F_{2}(x)=F_{1}(x)+C(x)} is also a solution. Therefore, the indefinite sum operator defines a family of functions differing by an arbitrary 1-periodic component, {\displaystyle C(x)}.
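For instance, since {\displaystyle \sin(2\pi x)} has period 1, {\displaystyle F_{1}(x)+\sin(2\pi x)} satisfies the same functional equation as {\displaystyle F_{1}(x)}.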
To select the unique canonical solution up to an additive constant {\displaystyle C} (instead of up to the additive 1-periodic function {\displaystyle C(x)}), one must impose additional constraints.
Complex analysis (exponential type)
Suppose {\displaystyle f(z)} is analytic in a vertical strip containing the real axis, and let {\displaystyle F(z)} be an analytic solution of {\displaystyle F(z+1)-F(z)=f(z)} in that strip. To ensure uniqueness, we require {\displaystyle F(z)} to be of minimal growth, specifically of exponential type less than {\displaystyle 2\pi } in the imaginary direction. That is, there exist constants {\displaystyle M>0} and {\displaystyle \epsilon >0} such that
{\displaystyle |F(z)|\leq Me^{(2\pi -\epsilon )|\Im (z)|}}
as {\displaystyle |\Im (z)|\to \infty }.[9][10]
Now let {\displaystyle F_{1}(z)} and {\displaystyle F_{2}(z)} be two analytic solutions satisfying this growth condition. Their difference {\displaystyle C(z)=F_{1}(z)-F_{2}(z)} is then analytic, 1-periodic (i.e., {\displaystyle C(z+1)=C(z)}), and inherits the same exponential type less than {\displaystyle 2\pi }.
A fundamental result in complex analysis states that a non-constant 1-periodic entire function must have exponential type at least {\displaystyle 2\pi }. This follows from its Fourier series expansion: if {\displaystyle C(z)} is non-constant, its Fourier series contains a term {\displaystyle a_{n}e^{2\pi inz}} with {\displaystyle n\neq 0}, which has type {\displaystyle 2\pi |n|\geq 2\pi }. Since {\displaystyle C(z)} has type strictly less than {\displaystyle 2\pi }, it cannot contain any such term and therefore must be constant.
Consequently, under this minimal growth condition, any two solutions differ by at most a constant. Hence {\displaystyle F(z)} is unique up to an additive constant {\displaystyle C}.
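As a simple illustration (not from the cited sources), for {\displaystyle f(z)=1} the solution {\displaystyle F(z)=z+C} has exponential type 0 and is therefore the canonical one, whereas {\displaystyle F(z)=z+\sin(2\pi z)+C} also solves the difference equation but has type exactly {\displaystyle 2\pi } and is excluded by the growth condition.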
Relationship to indefinite products
The indefinite product operator, denoted by {\displaystyle \prod _{x}}, is the multiplicative analogue of the indefinite sum. If {\displaystyle \prod _{x}f(x)=F(x)}, then
{\displaystyle {\frac {F(x+1)}{F(x)}}=f(x).}
Its common discrete analog is {\displaystyle \prod _{k=1}^{x-1}f(k)}. The two operators are related by
{\displaystyle \prod _{x}f(x)=\exp \left(\sum _{x}\ln f(x)\right),}
{\displaystyle \sum _{x}f(x)=\ln \left(\prod _{x}\exp(f(x))\right).}
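For example (a routine consequence of the identities above), {\displaystyle \prod _{x}x=\Gamma (x)} up to a 1-periodic factor, since {\displaystyle \exp \left(\sum _{x}\ln x\right)=\exp(\ln \Gamma (x)+C)}; compare the indefinite sum {\displaystyle \sum _{x}\log _{b}x=\log _{b}\Gamma (x)+C} listed below.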
Expansions and definitions
The Laplace summation formula is a formal asymptotic expansion (generally convergent only for polynomials) of the inverse forward difference {\displaystyle \Delta ^{-1}f(x)}:[11][12]
{\displaystyle \sum _{x}f(x)=\int _{0}^{x}f(t)dt-\sum _{k=1}^{\infty }{\frac {c_{k}\Delta ^{k-1}f(x)}{k!}}+C}
where {\displaystyle c_{k}=\int _{0}^{1}(x)_{k}dx} are the Cauchy numbers of the first kind and {\displaystyle (x)_{k}={\frac {\Gamma (x+1)}{\Gamma (x-k+1)}}} is the falling factorial.
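As a check (a routine computation, not from the cited sources), for {\displaystyle f(x)=x} one has {\displaystyle c_{1}={\tfrac {1}{2}}}, {\displaystyle c_{2}=-{\tfrac {1}{6}}}, and {\displaystyle \Delta f=1}, so the expansion terminates:
{\displaystyle \sum _{x}x={\frac {x^{2}}{2}}-{\frac {x}{2}}+{\frac {1}{12}}+C={\frac {x(x-1)}{2}}+C',}
with the constant {\displaystyle {\tfrac {1}{12}}} absorbed into the arbitrary constant.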
The inverse forward difference operator, {\displaystyle \Delta ^{-1}f(x)}, can be expressed formally (generally convergent only for polynomials) by its Newton series expansion:
{\displaystyle \sum _{x}f(x)=\sum _{k=1}^{\infty }{\binom {x}{k}}\Delta ^{k-1}f\left(0\right)+C=\sum _{k=1}^{\infty }{\frac {\Delta ^{k-1}f(0)}{k!}}(x)_{k}+C.}
Given that {\displaystyle f(x)} can be represented by its Maclaurin series (its Taylor series about {\displaystyle 0}), it is sometimes possible to represent the indefinite sum using Bernoulli polynomials, because {\displaystyle \sum _{x}x^{a}={\frac {B_{a+1}(x)}{a+1}}+C}:
{\displaystyle \sum _{x}f(x)=\sum _{n=1}^{\infty }{\frac {f^{(n-1)}(0)}{n!}}B_{n}(x)+C.}
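For instance (a routine check), {\displaystyle f(x)=x^{2}} gives
{\displaystyle \sum _{x}x^{2}={\frac {B_{3}(x)}{3}}+C={\frac {x(x-1)(2x-1)}{6}}+C,}
which is the classical closed form for {\displaystyle \sum _{k=0}^{x-1}k^{2}}.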
Müller–Schleicher axiomatic definition
If {\displaystyle f(x)} is analytic on the right half-plane and satisfies the decay condition {\displaystyle \lim _{x\to {+\infty }}f(x)=0}, the analytic continuation of {\displaystyle \nabla ^{-1}f(x)=\sum _{k=1}^{x}f(k)} is given by:[5]
{\displaystyle \nabla ^{-1}f(x)=\sum _{n=1}^{\infty }\left(f(n)-f(n+x)\right)+C.}
This formula follows from the axioms for fractional sums presented in the cited paper, which extend the definition of summation uniquely to non-integer and complex limits. The decay condition {\displaystyle \lim _{x\to {+\infty }}f(x)=0} is the simplest case of the paper's more general asymptotic requirements on the function {\displaystyle f(x)}.
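A minimal numerical sketch (an illustration under the stated decay assumption, not code from the cited paper): for {\displaystyle f(x)=1/x} at {\displaystyle x={\tfrac {1}{2}}}, the series converges to the harmonic number {\displaystyle H_{1/2}=2-2\ln 2}, and truncating it gives:

    # Truncated series sum_{n>=1} (f(n) - f(n + x)) for f(x) = 1/x at x = 1/2;
    # the exact value is the harmonic number H_{1/2} = 2 - 2 ln 2.
    from math import log

    def nabla_inverse(f, x, terms=200_000):
        # Partial sum of the series; the constant C is taken to be 0,
        # matching the empty-sum convention F(0) = 0.
        return sum(f(n) - f(n + x) for n in range(1, terms + 1))

    approx = nabla_inverse(lambda t: 1.0 / t, 0.5)
    exact = 2 - 2 * log(2)
    print(approx, exact)   # both are approximately 0.6137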
The Euler–Maclaurin formula extends {\displaystyle \nabla ^{-1}f(x)=\sum _{k=1}^{x}f(k)}:[6][9]
{\displaystyle {\begin{aligned}\nabla ^{-1}f(x)&=\int _{1}^{x}f(t)dt+{\frac {f(1)+f(x)}{2}}\\&\quad +\sum _{k=1}^{p}{\frac {B_{2k}}{(2k)!}}\left(f^{(2k-1)}(x)-f^{(2k-1)}(1)\right)+R_{p}+C\end{aligned}}}
where {\displaystyle B_{2k}} are the even-indexed Bernoulli numbers, {\displaystyle p} is an arbitrary positive integer, and {\displaystyle R_{p}} is the remainder term given by
{\displaystyle R_{p}=(-1)^{p+1}\int _{1}^{x}f^{(p)}(t){\frac {P_{p}(t)}{p!}}\,dt,}
with {\displaystyle P_{p}(t)=B_{p}(t-\lfloor t\rfloor )} the periodized Bernoulli function related to the Bernoulli polynomials.
The indefinite sum {\displaystyle \nabla ^{-1}f(x)=\sum _{k=1}^{x}f(k)} can also be analytically continued by applying the standard Abel–Plana formula to the finite sum {\displaystyle \sum _{k=1}^{n}f(k)} and then analytically continuing the integer limit {\displaystyle n} to the variable {\displaystyle x}. This yields the formula:[7]
{\displaystyle {\begin{aligned}\nabla ^{-1}f(x)&=\int _{1}^{x}f(t)dt+{\frac {f(1)+f(x)}{2}}\\&\quad +i\int _{0}^{\infty }{\frac {\left(f(x-it)-f(1-it)\right)-\left(f(x+it)-f(1+it)\right)}{e^{2\pi t}-1}}dt+C\end{aligned}}}
This analytic continuation is valid when the conditions for the original formula are met. The sufficient conditions are:[9][10]
Analyticity: {\displaystyle f(z)} must be analytic in the closed vertical strip between {\displaystyle \Re (z)=1} and {\displaystyle \Re (z)=\Re (x)}. The formula provides analytic continuation up to, but not beyond, the nearest singularities of {\displaystyle f} to the line {\displaystyle \Re (z)=1}.
Growth: {\displaystyle f(z)} must be of exponential type less than {\displaystyle 2\pi } in this strip, satisfying {\displaystyle |f(z)|\leq Me^{(2\pi -\epsilon )|\Im (z)|}} for some {\displaystyle M>0} and {\displaystyle \epsilon >0} as {\displaystyle |\Im (z)|\to \infty }.
Choice of the constant term
Analytic continuation of discrete sums
When the indefinite sum is meant to extend a discrete summation naturally, the constant term {\displaystyle C} is often fixed by the corresponding empty sum.
For the inverse forward difference, {\displaystyle \Delta ^{-1}f(x)}, the typical summation equivalent is {\displaystyle \sum _{k=0}^{x-1}f(k)}, so the empty-sum condition is {\displaystyle \Delta ^{-1}f(0)=0}, corresponding to the empty sum
{\displaystyle \sum _{k=0}^{-1}f(k).}
For the inverse backward difference, {\displaystyle \nabla ^{-1}f(x)}, the typical summation equivalent is {\displaystyle \sum _{k=1}^{x}f(k)}, so the empty-sum condition is {\displaystyle \nabla ^{-1}f(0)=0}, corresponding to the empty sum
{\displaystyle \sum _{k=1}^{0}f(k).}
In older texts relating to Bernoulli polynomials (predating more modern analytic techniques), the constant {\displaystyle C} was often fixed using integral conditions. Let
{\displaystyle F(x)=\sum _{x}f(x)+C.}
Then the constant {\displaystyle C} is fixed by the condition
{\displaystyle \int _{0}^{1}F(x)\,dx=0}
or
{\displaystyle \int _{1}^{2}F(x)\,dx=0.}
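For example (a standard property of Bernoulli polynomials, stated here as an illustration), with {\displaystyle f(x)=x} the condition {\displaystyle \int _{0}^{1}F(x)\,dx=0} selects {\displaystyle F(x)={\tfrac {B_{2}(x)}{2}}={\tfrac {x^{2}-x+1/6}{2}}}, since the Bernoulli polynomials satisfy {\displaystyle \int _{0}^{1}B_{n}(x)\,dx=0} for {\displaystyle n\geq 1}.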
Alternatively, Ramanujan summation can be used:
{\displaystyle \sum _{x\geq 1}^{\Re }f(x)=-f(0)-F(0)}
or, at 1,
{\displaystyle \sum _{x\geq 1}^{\Re }f(x)=-F(1)}
respectively.[13][14]
Indefinite summation by parts:[15]
{\displaystyle \sum _{x}f(x)\Delta g(x)=f(x)g(x)-\sum _{x}(g(x)+\Delta g(x))\Delta f(x)}
{\displaystyle \sum _{x}f(x)\Delta g(x)+\sum _{x}g(x)\Delta f(x)=f(x)g(x)-\sum _{x}\Delta f(x)\Delta g(x)}
Definite summation by parts:
{\displaystyle \sum _{i=a}^{b}f(i)\Delta g(i)=f(b+1)g(b+1)-f(a)g(a)-\sum _{i=a}^{b}g(i+1)\Delta f(i)}
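For example (a routine application, not from the cited sources), taking {\displaystyle f(i)=i} and {\displaystyle g(i)=2^{i}}, so that {\displaystyle \Delta g(i)=2^{i}} and {\displaystyle \Delta f(i)=1}, the definite rule gives
{\displaystyle \sum _{i=a}^{b}i\,2^{i}=(b+1)2^{b+1}-a\,2^{a}-\left(2^{b+2}-2^{a+1}\right),}
which for {\displaystyle a=1,\,b=2} evaluates to {\displaystyle 10=1\cdot 2+2\cdot 4}.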
If {\displaystyle T} is a period of the function {\displaystyle f(x)}, then
{\displaystyle \sum _{x}f(Tx)=xf(Tx)+C.}
If {\displaystyle T} is an antiperiod of the function {\displaystyle f(x)}, that is, {\displaystyle f(x+T)=-f(x)}, then
{\displaystyle \sum _{x}f(Tx)=-{\frac {1}{2}}f(Tx)+C.}
Consider the unique analytic continuation of {\displaystyle F(z)=\nabla ^{-1}f(z)}, defined by {\displaystyle F(x)-F(x-1)=f(x)}, of exponential type less than {\displaystyle 2\pi } in the imaginary direction, where {\displaystyle f(z)} is entire and the constant term {\displaystyle C} is chosen such that {\displaystyle F(0)=0} (the empty-sum condition). Then {\displaystyle F(z)} satisfies a reflection formula.
If {\displaystyle f(z)} is an odd function ({\displaystyle f(-z)=-f(z)}), the unique analytic continuation {\displaystyle F(z)} satisfies
{\displaystyle F(z)=F(-1-z).}
This represents a point symmetry about {\displaystyle z=-1/2}.
If {\displaystyle f(z)} is an even function ({\displaystyle f(-z)=f(z)}), the unique analytic continuation {\displaystyle F(z)} satisfies
{\displaystyle F(z)+F(-1-z)=F(-1).}
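As a quick check of the odd case (an illustration, not from the cited sources), {\displaystyle f(z)=z} gives {\displaystyle F(z)={\tfrac {z(z+1)}{2}}} with {\displaystyle F(0)=0}, and indeed
{\displaystyle F(-1-z)={\frac {(-1-z)(-z)}{2}}={\frac {z(z+1)}{2}}=F(z).}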
List of indefinite sums
Antidifferences of rational functions
For positive integer exponents, Faulhaber's formula can be used. Note that {\displaystyle x} in the result of Faulhaber's formula must be replaced with {\displaystyle x-1} because of the offset: Faulhaber's formula computes {\displaystyle \nabla ^{-1}} rather than {\displaystyle \Delta ^{-1}}.
For negative integer exponents, the indefinite sum is closely related to the polygamma function:
{\displaystyle \sum _{x}{\frac {1}{x^{a}}}={\frac {(-1)^{a-1}\psi ^{(a-1)}(x)}{(a-1)!}}+C,\,a\in \mathbb {N} }
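In particular, for {\displaystyle a=1} this reduces to {\displaystyle \sum _{x}{\tfrac {1}{x}}=\psi (x)+C}, consistent with the harmonic numbers, since {\displaystyle H_{x-1}=\psi (x)+\gamma } and the Euler–Mascheroni constant {\displaystyle \gamma } is absorbed into {\displaystyle C}.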
For fractions not listed in this section, one may use the polygamma function together with partial fraction decomposition. More generally,
{\displaystyle \sum _{x}x^{a}={\begin{cases}{\frac {B_{a+1}(x)}{a+1}}+C,&{\text{if }}a\neq -1\\\psi (x)+C,&{\text{if }}a=-1\end{cases}}={\begin{cases}-\zeta (-a,x)+C,&{\text{if }}a\neq -1\\\psi (x)+C,&{\text{if }}a=-1\end{cases}}}
where {\displaystyle B_{a}(x)} are the Bernoulli polynomials, {\displaystyle \zeta (s,a)} is the Hurwitz zeta function, and {\displaystyle \psi (z)} is the digamma function. This is related to the generalized harmonic numbers.
Because the generalized harmonic numbers use reciprocal powers, the order appears as {\displaystyle -a}, and the most common form uses the inverse backward difference offset:
{\displaystyle \nabla ^{-1}x^{a}={H_{x}^{(-a)}}=\zeta (-a)-\zeta (-a,x+1).}
Here, {\displaystyle \zeta (-a)} plays the role of the constant {\displaystyle C}.
The Bernoulli polynomials are also related via a partial derivative with respect to {\displaystyle x}:
{\displaystyle {\frac {\partial }{\partial x}}\left(\sum _{x}x^{a}\right)=B_{a}(x)=-a\zeta (1-a,x).}
Similarly, using the inverse of the backward difference operator may be considered more natural, since
{\displaystyle {\frac {\partial }{\partial x}}\left(\nabla ^{-1}x^{a}\right){\bigg |}_{x=0}=-a\zeta (1-a,x+1){\bigg |}_{x=0}=-a\zeta (1-a)=B_{a}.}
Further generalization comes from use of the Lerch transcendent:
{\displaystyle \sum _{x}{\frac {z^{x}}{(x+a)^{s}}}=-z^{x}\,\Phi (z,s,x+a)+C,}
which generalizes the generalized harmonic numbers as {\displaystyle z\Phi \left(z,s,a+1\right)-z^{x+1}\Phi \left(z,s,x+1+a\right)} when taking {\displaystyle \nabla ^{-1}}. Additionally, the partial derivative is given by
{\displaystyle {\frac {\partial }{\partial x}}\left(-z^{x}\Phi \left(z,s,x+a\right)\right)=z^{x}\left(s\Phi \left(z,s+1,x+a\right)-\ln(z)\Phi \left(z,s,x+a\right)\right).}
{\displaystyle \sum _{x}B_{a}(x)=(x-1)B_{a}(x)-{\frac {a}{a+1}}B_{a+1}(x)+C}
Antidifferences of exponential functions
{\displaystyle \sum _{x}a^{x}={\frac {a^{x}}{a-1}}+C}
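This can be verified directly: {\displaystyle \Delta \left({\frac {a^{x}}{a-1}}\right)={\frac {a^{x+1}-a^{x}}{a-1}}=a^{x}.}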
Antidifferences of logarithmic functions
{\displaystyle \sum _{x}\log _{b}x=\log _{b}\Gamma (x)+C}
{\displaystyle \sum _{x}\log _{b}ax=\log _{b}(a^{x-1}\Gamma (x))+C}
Antidifferences of hyperbolic functions
{\displaystyle \sum _{x}\sinh ax={\frac {1}{2}}\operatorname {csch} \left({\frac {a}{2}}\right)\cosh \left({\frac {a}{2}}-ax\right)+C}
{\displaystyle \sum _{x}\cosh ax={\frac {1}{2}}\operatorname {csch} \left({\frac {a}{2}}\right)\sinh \left(ax-{\frac {a}{2}}\right)+C}
{\displaystyle \sum _{x}\tanh ax={\frac {1}{a}}\psi _{e^{a}}\left(x-{\frac {i\pi }{2a}}\right)+{\frac {1}{a}}\psi _{e^{a}}\left(x+{\frac {i\pi }{2a}}\right)-x+C}
where {\displaystyle \psi _{q}(x)} is the q-digamma function.
Antidifferences of trigonometric functions
{\displaystyle \sum _{x}\sin ax=-{\frac {1}{2}}\csc \left({\frac {a}{2}}\right)\cos \left({\frac {a}{2}}-ax\right)+C\,,\,\,a\neq 2n\pi }
{\displaystyle \sum _{x}\cos ax={\frac {1}{2}}\csc \left({\frac {a}{2}}\right)\sin \left(ax-{\frac {a}{2}}\right)+C\,,\,\,a\neq 2n\pi }
{\displaystyle \sum _{x}\sin ^{2}ax={\frac {x}{2}}+{\frac {1}{4}}\csc(a)\sin(a-2ax)+C\,\,,\,\,a\neq n\pi }
{\displaystyle \sum _{x}\cos ^{2}ax={\frac {x}{2}}-{\frac {1}{4}}\csc(a)\sin(a-2ax)+C\,\,,\,\,a\neq n\pi }
{\displaystyle \sum _{x}\tan ax=ix-{\frac {1}{a}}\psi _{e^{2ia}}\left(x-{\frac {\pi }{2a}}\right)+C\,,\,\,a\neq {\frac {n\pi }{2}}}
where {\displaystyle \psi _{q}(x)} is the q-digamma function.
{\displaystyle {\begin{aligned}\sum _{x}\tan x&=ix-\psi _{e^{2i}}\left(x+{\frac {\pi }{2}}\right)+C\\&=-\sum _{k=1}^{\infty }\left(\psi \left(k\pi -{\frac {\pi }{2}}+1-x\right)+\psi \left(k\pi -{\frac {\pi }{2}}+x\right)\right.\\&\quad \left.-\psi \left(k\pi -{\frac {\pi }{2}}+1\right)-\psi \left(k\pi -{\frac {\pi }{2}}\right)\right)+C\end{aligned}}}
{\displaystyle \sum _{x}\cot ax=-ix-{\frac {i\psi _{e^{2ia}}(x)}{a}}+C\,,\,\,a\neq {\frac {n\pi }{2}}}
{\displaystyle {\begin{aligned}\sum _{x}\operatorname {sinc} x&=\operatorname {sinc} (x-1)\left({\frac {1}{2}}+(x-1)\right.\\&\quad \left.\times \left(\ln(2)+{\frac {\psi ({\frac {x-1}{2}})+\psi ({\frac {1-x}{2}})}{2}}\right.\right.\\&\quad \quad \left.\left.-{\frac {\psi (x-1)+\psi (1-x)}{2}}\right)\right)+C\end{aligned}}}
where {\displaystyle \operatorname {sinc} (x)} is the normalized sinc function.
Antidifferences of special functions
{\displaystyle \sum _{x}\psi (x)=(x-1)\psi (x)-x+C}
{\displaystyle \sum _{x}\Gamma (x)=(-1)^{x+1}\Gamma (x){\frac {\Gamma (1-x,-1)}{e}}+C}
where {\displaystyle \Gamma (s,x)} is the incomplete gamma function.
{\displaystyle \sum _{x}(x)_{a}={\frac {(x)_{a+1}}{a+1}}+C}
where {\displaystyle (x)_{a}} is the falling factorial.
{\displaystyle \sum _{x}\operatorname {sexp} _{a}(x)=\ln _{a}{\frac {(\operatorname {sexp} _{a}(x))'}{(\ln a)^{x}}}+C}
(see super-exponential function)
^ Man, Yiu-Kwong (1993), "On computing closed forms for indefinite summations", Journal of Symbolic Computation, 16 (4): 355–376, doi:10.1006/jsco.1993.1053, MR 1263873
^ Goldberg, Samuel (1958), Introduction to difference equations, with illustrative examples from economics, psychology, and sociology, Wiley, New York, and Chapman & Hall, London, p. 41, ISBN 978-0-486-65084-5, MR 0094249: "If {\displaystyle Y} is a function whose first difference is the function {\displaystyle y}, then {\displaystyle Y} is called an indefinite sum of {\displaystyle y} and denoted by {\displaystyle \Delta ^{-1}y}"; reprinted by Dover Books, 1986
^ Kelley, Walter G.; Peterson, Allan C. (2001). Difference Equations: An Introduction with Applications. Academic Press. p. 20. ISBN 0-12-403330-X.
^ "Handbook of Discrete and Combinatorial Mathematics", Kenneth H. Rosen, John G. Michaels, CRC Press, 1999, ISBN 0-8493-0149-1
^ a b Markus Müller and Dierk Schleicher, How to Add a Noninteger Number of Terms: From Axioms to New Identities, Amer. Math. Mon. 118(2), 136–152 (2011).
^ a b Candelpergher, Bernard (2017). "Ramanujan Summation of Divergent Series" (PDF). HAL Archives Ouvertes. p. 3. Retrieved 2025-12-07.
^ a b Candelpergher, Bernard (2017). "Ramanujan Summation of Divergent Series" (PDF). HAL Archives Ouvertes. p. 23. Retrieved 2025-12-07.
^ Algorithms for Nonlinear Higher Order Difference Equations, Manuel Kauers
^ a b c "§2.10 Sums and Sequences". NIST Digital Library of Mathematical Functions. National Institute of Standards and Technology. Retrieved 2025-11-20.
^ a b Olver, Frank W. J. (1997). Asymptotics and Special Functions. A K Peters Ltd. p. 290. ISBN 978-1-56881-069-0.
^ Bernoulli numbers of the second kind on MathWorld
^ Ferraro, Giovanni (2008). The Rise and Development of the Theory of Series up to the Early 1820s. Springer Science+Business Media, LLC. p. 248. ISBN 978-0-387-73468-2.
^ Bruce C. Berndt, Ramanujan's Notebooks, Archived 2006-10-12 at the Wayback Machine, Ramanujan's Theory of Divergent Series, Chapter 6, Springer-Verlag (ed.), (1939), pp. 133–149.
^ Éric Delabaere, Ramanujan's Summation, Algorithms Seminar 2001–2002, F. Chyzak (ed.), INRIA, (2003), pp. 83–88.
^ Kelley, Walter G.; Peterson, Allan C. (2001). Difference Equations: An Introduction with Applications. Academic Press. p. 24. ISBN 0-12-403330-X.
"Difference Equations: An Introduction with Applications", Walter G. Kelley, Allan C. Peterson, Academic Press, 2001, ISBN 0-12-403330-X
Markus Müller. How to Add a Non-Integer Number of Terms, and How to Produce Unusual Infinite Summations
Markus Mueller, Dierk Schleicher. Fractional Sums and Euler-like Identities
S. P. Polyakov. Indefinite summation of rational functions with additional minimization of the summable part. Programmirovanie, 2008, Vol. 34, No. 2.
"Finite-Difference Equations And Simulations", Francis B. Hildebrand, Prenctice-Hall, 1968