1. Introduction
Convexity is a fundamental concept in mathematics, particularly in the field of optimization and analysis. The history of convexity dates back to ancient times, with early mathematical investigations conducted by ancient Greek mathematicians such as Euclid and Archimedes. However, the formal development of convexity theory began to take shape in the 17th and 18th centuries with the works of mathematicians like Isaac Newton and Leonhard Euler.
In the 19th century, the concept of convexity saw significant advancements, particularly with the emergence of convex analysis. Augustin-Louis Cauchy, Jean-Victor Poncelet, and Joseph Fourier made notable contributions to the field during this period. The mid-20th century witnessed further progress in convex optimization and convex geometry, with mathematicians like Hermann Minkowski, George Dantzig, and John von Neumann playing pivotal roles in advancing the theory and applications of convexity.
Today, convexity theory is a cornerstone of mathematics, with widespread applications in various fields, including economics, engineering, computer science, and physics. Its importance stems from its elegance, versatility, and utility in modeling and solving a wide range of optimization problems.
The Hermite-Hadamard inequality, also known as the Hermite-Hadamard integral inequality, is a fundamental result in mathematical analysis that provides bounds on the value of certain types of integrals. It is named after the French mathematicians Charles Hermite and Jacques Hadamard. Charles Hermite (1822-1901) was a prominent French mathematician known for his work in number theory, algebra, and mathematical analysis; he made significant contributions to the theory of elliptic functions, algebraic number theory, and mathematical physics. Jacques Hadamard (1865-1963) was a leading French mathematician who made substantial contributions to number theory, complex analysis, and partial differential equations; for more details, see [1,2]. Mathematically, the inequality is stated as follows:
If $F: I \subseteq \mathbb{R} \to \mathbb{R}$ is a convex function on the interval $I$ and $a, b \in I$ with $a < b$, then the inequality
$$F\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_{a}^{b} F(x)\,dx \le \frac{F(a)+F(b)}{2}$$
holds. It is known as the Hermite-Hadamard inequality for convex functions.
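As a quick sanity check, the two bounds can be verified numerically. The sketch below approximates the integral mean by a midpoint rule; the convex test function $F(x) = e^{x}$ and the interval $[0,2]$ are illustrative choices, not taken from the source.

```python
import math

def hh_bounds(F, a, b, n=100000):
    """Return (midpoint value, integral mean, endpoint average) of F on [a, b]."""
    # Midpoint-rule approximation of (1/(b-a)) * integral of F over [a, b]
    step = (b - a) / n
    mean = sum(F(a + (i + 0.5) * step) for i in range(n)) * step / (b - a)
    return F((a + b) / 2), mean, (F(a) + F(b)) / 2

lo, mid, hi = hh_bounds(math.exp, 0.0, 2.0)
print(lo <= mid <= hi)  # True: the Hermite-Hadamard chain holds for convex F
```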
Breckner was the first to introduce s-convex functions, in 1979 [3], and connections with s-convexity in its initial sense were discussed extensively in [4]. A direct proof of Breckner's seminal result was later given in 2001 by Pycia [5]. Given the pivotal role of convexity and s-convexity in characterizing optimality in mathematical programming, numerous researchers have devoted substantial attention to s-convex functions. Notably, H. Hudzik, et al. [4] elucidated two variants of s-convexity, demonstrating that s-convexity in the second sense is stronger than s-convexity in the initial sense whenever $s \in (0,1)$. We broadly refer to s-convexity in its second sense simply as s-convexity. For $s \in (0,1)$, this class of functions is of greater interest than the class of convex functions, and our primary findings reveal that results obtained via s-convexity significantly outperform those derived from convexity. Additionally, s-convexity generalizes convexity, so results for convex functions follow by setting s = 1 in the s-convex outcomes. The Hadamard inequality for s-convex functions in the initial sense was introduced by S. S. Dragomir and Fitzpatrick [6], who also introduced the Hadamard inequality for s-convex functions in the second sense in the same paper. The class of h-convex functions was introduced by S. Varošanec in [7]; it generalizes convex functions, s-convex functions, Godunova-Levin functions, and p-functions. Hadamard-type inequalities for h-convex functions were introduced by Sarikaya, et al. in [8].
The primary motivation behind co-ordinated convex functions lies in the fact that every convex mapping on a rectangle retains convexity when viewed along its co-ordinates; that is, the function remains convex when examined individually along each co-ordinate axis. However, there also exist co-ordinated convex functions that are not convex, which highlights the nuanced relationship between co-ordinated convexity and global convexity (see, for example, [9,10]). For more results on co-ordinated convexity, we refer the interested reader to [9-20].
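The classical counterexample illustrating this gap (a standard choice in the literature, not spelled out in the source) is $F(x,y) = xy$ on $[0,1]^2$: it is linear, hence convex, in each coordinate separately, yet it fails midpoint convexity as a function of two variables.

```python
# F(x, y) = x*y is convex (indeed linear) along each coordinate,
# but not convex on [0, 1]^2 as a function of two variables.
F = lambda x, y: x * y

# Along each coordinate the partial map is linear, so convexity holds:
t, x1, x2, y0 = 0.3, 0.2, 0.9, 0.5
coord_ok = F(t*x1 + (1-t)*x2, y0) <= t*F(x1, y0) + (1-t)*F(x2, y0) + 1e-12

# Global convexity fails at the midpoint of (0, 1) and (1, 0):
lhs = F(0.5, 0.5)                            # 0.25
rhs = 0.5 * F(0.0, 1.0) + 0.5 * F(1.0, 0.0)  # 0.0
print(coord_ok, lhs > rhs)  # True True
```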
S. S. Dragomir established the following Hermite-Hadamard type inequalities for co-ordinated convex functions defined on a rectangle in the plane in [9].
Theorem 1
[9] Suppose that a function $F: \Delta = [a,b] \times [c,d] \to \mathbb{R}$ is convex on the co-ordinates. Then one has the inequalities:
$$F\left(\frac{a+b}{2}, \frac{c+d}{2}\right) \le \frac{1}{2}\left[\frac{1}{b-a}\int_{a}^{b} F\left(x, \frac{c+d}{2}\right)dx + \frac{1}{d-c}\int_{c}^{d} F\left(\frac{a+b}{2}, y\right)dy\right]$$
$$\le \frac{1}{(b-a)(d-c)}\int_{a}^{b}\int_{c}^{d} F(x,y)\,dy\,dx$$
$$\le \frac{1}{4}\left[\frac{1}{b-a}\int_{a}^{b}\left(F(x,c) + F(x,d)\right)dx + \frac{1}{d-c}\int_{c}^{d}\left(F(a,y) + F(b,y)\right)dy\right]$$
$$\le \frac{F(a,c) + F(a,d) + F(b,c) + F(b,d)}{4}.$$
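Theorem 1's chain of inequalities can be probed numerically. The sketch below approximates each integral by midpoint sums and checks that the five quantities are increasing; the convex function $F(x,y) = x^2 + y^2$ and the rectangle $[0,1] \times [0,2]$ are illustrative assumptions.

```python
def mean1(g, a, b, n=500):
    """Midpoint-rule approximation of the integral mean of g over [a, b]."""
    step = (b - a) / n
    return sum(g(a + (i + 0.5) * step) for i in range(n)) * step / (b - a)

def hh_chain(F, a, b, c, d):
    """The five quantities of Dragomir's co-ordinated Hermite-Hadamard chain."""
    mx, my = (a + b) / 2, (c + d) / 2
    c1 = F(mx, my)
    c2 = 0.5 * (mean1(lambda x: F(x, my), a, b) + mean1(lambda y: F(mx, y), c, d))
    c3 = mean1(lambda x: mean1(lambda y: F(x, y), c, d), a, b)
    c4 = 0.25 * (mean1(lambda x: F(x, c) + F(x, d), a, b)
                 + mean1(lambda y: F(a, y) + F(b, y), c, d))
    c5 = (F(a, c) + F(a, d) + F(b, c) + F(b, d)) / 4
    return c1, c2, c3, c4, c5

vals = hh_chain(lambda x, y: x * x + y * y, 0.0, 1.0, 0.0, 2.0)
print(all(vals[i] <= vals[i + 1] + 1e-9 for i in range(4)))  # True
```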
The Hadamard-type inequality for s-convex functions in the second sense, defined on the co-ordinates of a rectangle in the plane $\mathbb{R}^2$, was established by Alomari and M. Darus in [21]. Alomari and M. Darus also established similar Hadamard-type inequalities for s-convex functions in the first sense defined on the co-ordinates of a rectangle in the plane [11]. The Hadamard inequality for h-convex functions defined on the co-ordinates of a rectangle in the plane was introduced by Amer Latif and M. W. Alomari in [22].
The Hermite-Hadamard Inequality spans a wide range of operator convex functions, giving rise to numerous intriguing inequalities within the dynamic field of matrix analysis. A natural progression from the classical Hermite-Hadamard Inequality to Hermitian matrices could entail a double inequality. This extension aims to capture the interplay between the inherent properties of Hermitian matrices and the principles underlying the Hermite-Hadamard Inequality, potentially yielding new insights and applications in matrix analysis.
The Hermite–Hadamard-Mercer type inequality is an extension of the classical Hermite–Hadamard inequality. It provides a relationship between the average value of a function over an interval and the function's integral. Specifically, it states that if a function is convex (or satisfies certain convexity conditions) on a given interval, then the average value of the function over that interval is greater than or equal to the value of the function at the midpoint of the interval. This inequality has various applications in mathematical analysis, optimization, and related fields. The literature regarding Hermite–Hadamard-Mercer inequality is as follows:
In 2003, Mercer authored a paper discussing a modification of Jensen's inequality [23]. In 2006, Pečarić and colleagues introduced a Mercer-type Jensen inequality for operator convex functions, accompanied by various applications [24]. Niezgoda, in 2009, worked on the generalization of Mercer's result on convex functions [25]. The Hermite–Hadamard-Mercer type inequality was introduced by Kian, et al. in 2013 [26]. The Hermite–Hadamard-Mercer inequality for -convex functions is given by Xu, et al. in [27].
In mathematics, there is a crucial link between the value of a convex function evaluated at an integral and the integral of the convex function itself. This connection is formally termed Jensen's inequality, named after the Danish mathematician Johan Jensen, who formulated it in 1906. Jensen's inequality for convex functions holds a prominent place among the most celebrated inequalities in both mathematics and statistics. It serves as a foundational principle from which numerous other notable inequalities stem: Hölder's inequality and Minkowski's inequality emerge as special instances of Jensen's inequality for convex functions, highlighting its extensive applicability. Over time, a multitude of variations, refinements, and generalizations of Jensen's inequality have been developed and extensively investigated, underscoring its enduring importance and versatility across diverse mathematical contexts. For more detail, see [28-32].
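Jensen's inequality in its discrete form is easy to illustrate numerically. In the sketch below, the nodes, the weights, and the convex function $F(x) = x^2$ are illustrative choices.

```python
# Jensen's inequality: for a convex F and non-negative weights summing to 1,
#   F(sum w_i * x_i) <= sum w_i * F(x_i).
xs = [0.5, 1.0, 2.0, 4.0]
ws = [0.1, 0.2, 0.3, 0.4]    # weights sum to 1
F = lambda x: x * x          # convex

lhs = F(sum(w * x for w, x in zip(ws, xs)))
rhs = sum(w * F(x) for w, x in zip(ws, xs))
print(lhs <= rhs)  # True (here lhs ~ 6.0025 and rhs ~ 7.825)
```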
The outline of the article is structured as follows: In Section 2, we provide fundamental definitions and preliminary concepts to establish a solid foundation for our subsequent discussions. In Section 3, we demonstrate the Jensen-Mercer inequality and Hermite–Hadamard–Mercer type inequalities for co-ordinated h-convex functions, contributing to the theoretical framework of our research. Section 4 presents numerical examples and computational analyses, offering empirical validation of the newly derived results and their practical significance. Finally, in Section 5, we present our conclusions, summarizing the key findings and implications of our study.
2. Preliminaries
In this section, we present some basic definitions and results that are needed throughout the article.
Definition 1
[33] Let $I$ be a convex subset of a real vector space. A function $F: I \to \mathbb{R}$ is said to be convex if
$$F(tx + (1-t)y) \le tF(x) + (1-t)F(y)$$
for all $x, y \in I$ and $t \in [0,1]$.
Definition 2
[4] A function $F: [0,\infty) \to \mathbb{R}$ is said to be s-convex (in the second sense) if
$$F(tx + (1-t)y) \le t^{s}F(x) + (1-t)^{s}F(y)$$
for all $x, y \in [0,\infty)$, $t \in [0,1]$, and some fixed $s \in (0,1]$.
Definition 3
Let $h: J \to \mathbb{R}$ be a positive function defined on an interval $J \subseteq \mathbb{R}$ with $(0,1) \subseteq J$. A function $F: I \to \mathbb{R}$ is said to be h-convex, or $F$ is said to belong to the class $SX(h, I)$, if $F$ is non-negative and for all $x, y \in I$ and $t \in (0,1)$ we have
$$F(tx + (1-t)y) \le h(t)F(x) + h(1-t)F(y).$$
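The h-convexity inequality can be checked pointwise on a grid. Below, $F(x) = x^2$ together with $h(t) = t$ (ordinary convexity) and with the dominating function $h(t) = \sqrt{t}$ are illustrative choices, not taken from the source.

```python
# Grid check of the h-convexity inequality
#   F(t*x + (1-t)*y) <= h(t)*F(x) + h(1-t)*F(y)
def h_convex_on_grid(F, h, lo, hi, steps=40):
    pts = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    ts = [i / steps for i in range(1, steps)]  # t in (0, 1)
    return all(
        F(t * x + (1 - t) * y) <= h(t) * F(x) + h(1 - t) * F(y) + 1e-12
        for x in pts for y in pts for t in ts
    )

F = lambda x: x * x
print(h_convex_on_grid(F, lambda t: t, 0.0, 3.0))         # True: convexity
print(h_convex_on_grid(F, lambda t: t ** 0.5, 0.0, 3.0))  # True: sqrt(t) >= t on (0,1)
```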
Definition 4
[9] A mapping $F: \Delta \to \mathbb{R}$ is convex on the co-ordinates if the following inequality holds:
Dragomir introduced a modification for convex functions on the co-ordinates, referred to as co-ordinated convex functions [9,10], as follows: a function $F: \Delta \to \mathbb{R}$ is convex on the co-ordinates on $\Delta$ if the partial mappings $F_{y}: [a,b] \to \mathbb{R}$, $F_{y}(u) = F(u,y)$, and $F_{x}: [c,d] \to \mathbb{R}$, $F_{x}(v) = F(x,v)$, are convex for all $x \in [a,b]$ and $y \in [c,d]$.
A formal definition of coordinated convex functions can be expressed as follows:
Definition 5
[20] A mapping $F: \Delta \to \mathbb{R}$ is convex on the co-ordinates if the following inequality holds:
$$F(tx + (1-t)z, \lambda y + (1-\lambda)w) \le t\lambda F(x,y) + t(1-\lambda)F(x,w) + (1-t)\lambda F(z,y) + (1-t)(1-\lambda)F(z,w)$$
for all $(x,y), (z,w) \in \Delta$ and $t, \lambda \in [0,1]$.
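The co-ordinated convexity inequality can likewise be verified on a grid. As an illustrative (assumed) test function we use $F(x,y) = xy$, for which the four-term inequality in fact holds with equality because $F$ is bilinear.

```python
# Grid check of the co-ordinated convexity inequality
#   F(t*x+(1-t)*z, s*y+(1-s)*w) <= t*s*F(x,y) + t*(1-s)*F(x,w)
#                                 + (1-t)*s*F(z,y) + (1-t)*(1-s)*F(z,w)
def coord_convex_on_grid(F, pts, steps=10):
    ts = [i / steps for i in range(steps + 1)]
    return all(
        F(t*x + (1-t)*z, s*y + (1-s)*w)
        <= (t*s*F(x, y) + t*(1-s)*F(x, w)
            + (1-t)*s*F(z, y) + (1-t)*(1-s)*F(z, w) + 1e-12)
        for x in pts for z in pts for y in pts for w in pts
        for t in ts for s in ts
    )

pts = [0.0, 0.25, 0.5, 0.75, 1.0]
print(coord_convex_on_grid(lambda x, y: x * y, pts))  # True
```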
Definition 6
A mapping $F: \Delta \to \mathbb{R}$ is h-convex on the co-ordinates if the following inequality holds:
$$F(tx + (1-t)z, \lambda y + (1-\lambda)w) \le h(t)h(\lambda)F(x,y) + h(t)h(1-\lambda)F(x,w) + h(1-t)h(\lambda)F(z,y) + h(1-t)h(1-\lambda)F(z,w)$$
for all $(x,y), (z,w) \in \Delta$ and $t, \lambda \in (0,1)$.
Definition 7
[7] Let $F$ be an h-convex function defined on the real interval $I$. If $w_1, \ldots, w_n$ are positive real numbers and $h$ is a supermultiplicative function, then
$$F\left(\frac{1}{W_n}\sum_{i=1}^{n} w_i x_i\right) \le \sum_{i=1}^{n} h\left(\frac{w_i}{W_n}\right)F(x_i), \qquad x_1, \ldots, x_n \in I,$$
where $W_n = \sum_{i=1}^{n} w_i$.
Definition 8
[34] Let $h: J \to \mathbb{R}$ be a supermultiplicative function, and let $w_1, \ldots, w_n$ be positive real numbers such that $W_n = \sum_{i=1}^{n} w_i$. If $F$ is an h-convex function defined on a real interval, then for any finite positive increasing sequence $x_1 < x_2 < \cdots < x_n$ in that interval, the corresponding Jensen-Mercer type inequality for h-convex functions holds.
Definition 9
[35] This definition, provided by Stromer, describes a function $g: J \to \mathbb{R}$ as supermultiplicative if it satisfies the inequality
$$g(xy) \ge g(x)g(y) \qquad (2.9)$$
for all $x, y \in J$. Conversely, if the inequality is reversed, the function $g$ is termed submultiplicative.
Example 1
Consider the function $g(x) = (x + c)^{p-1}$, where $x > 0$ and $c \ge 0$. If $c = 0$, then $g$ is multiplicative. If $c \ge 1$, then for $p \in (0,1)$ the function $g$ is supermultiplicative, that is, $g$ satisfies the inequality (2.9); on the other hand, for $p > 1$ the function $g$ is submultiplicative.
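A quick numerical probe of this dichotomy is given below, assuming the standard example $g(x) = (x+c)^{p-1}$ with $c \ge 1$ (a common choice in the h-convexity literature; the exact function in the source is garbled by extraction).

```python
# Probe super-/sub-multiplicativity of g(x) = (x + c)**(p - 1) on a grid.
def is_super(g, pts, tol=1e-12):
    # g(x*y) >= g(x)*g(y) for all sampled x, y
    return all(g(x * y) >= g(x) * g(y) - tol for x in pts for y in pts)

def is_sub(g, pts, tol=1e-12):
    # g(x*y) <= g(x)*g(y) for all sampled x, y
    return all(g(x * y) <= g(x) * g(y) + tol for x in pts for y in pts)

pts = [0.1 * k for k in range(1, 51)]        # sample points in (0, 5]
c = 1.0
g_p_half = lambda x: (x + c) ** (0.5 - 1.0)  # p = 0.5 in (0, 1)
g_p_two  = lambda x: (x + c) ** (2.0 - 1.0)  # p = 2 > 1
print(is_super(g_p_half, pts), is_sub(g_p_two, pts))  # True True
```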
3. Main results
In this section, we introduce the Jensen-Mercer inequality and several Hermite–Hadamard-Mercer type inequalities tailored for co-ordinated h-convex functions.
Lemma 1
Let $h$ be a non-negative supermultiplicative function, and let $a < b$ and $c < d$. Let $F: \Delta = [a,b] \times [c,d] \to \mathbb{R}$ be any co-ordinated h-convex function, and let $(x_i)$ and $(y_j)$ be finite positive increasing sequences in $[a,b]$ and $[c,d]$, respectively. Then the following Jensen-Mercer type inequality holds:
Proof. Assume that $x \in [a,b]$ and $y \in [c,d]$. Let us write $u = a + b - x$. Then $x + u = a + b$, so that the pairs $(x, u)$ and $(a, b)$ possess the same midpoint. Since that is the case, there exists $t \in [0,1]$ such that
$$x = ta + (1-t)b, \qquad a + b - x = (1-t)a + tb.$$
Similarly, writing $v = c + d - y$, we have $y + v = c + d$, so that the pairs $(y, v)$ and $(c, d)$ share the same midpoint. Since that is the case, there exists $\lambda \in [0,1]$ such that
$$y = \lambda c + (1-\lambda)d, \qquad c + d - y = (1-\lambda)c + \lambda d.$$
By the Jensen inequality for co-ordinated h-convex functions and using (3.3), with $t, \lambda \in [0,1]$ as above, we obtain the stated inequality. The proof of Lemma 1 is completed.
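The midpoint step used in the proof can be made explicit: for $x \in [a,b]$, taking $t = (b-x)/(b-a)$ yields the convex representations of $x$ and of its Mercer reflection $a+b-x$. A small sketch (the numeric values are illustrative):

```python
# For x in [a, b], the pairs (x, a + b - x) and (a, b) share the midpoint
# (a + b)/2, and t = (b - x)/(b - a) in [0, 1] yields
#   x = t*a + (1 - t)*b   and   a + b - x = (1 - t)*a + t*b.
def mercer_parameter(a, b, x):
    return (b - x) / (b - a)

a, b, x = 1.0, 5.0, 2.2
t = mercer_parameter(a, b, x)
ok = (0.0 <= t <= 1.0
      and abs(t * a + (1 - t) * b - x) < 1e-12
      and abs((1 - t) * a + t * b - (a + b - x)) < 1e-12)
print(ok)  # True
```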
Theorem 2
Let $a < b$ and $c < d$, and let $\Delta = [a,b] \times [c,d]$. Also, assume that $h$ is a non-negative supermultiplicative function. If the function $F: \Delta \to \mathbb{R}$ is co-ordinated h-convex on $\Delta$, then for any finite positive sequences $(x_i)$ and $(y_j)$ from $[a,b]$ and $[c,d]$, respectively, the following inequality holds:
Proof. Since $F$ is co-ordinated h-convex on $\Delta$, combining the preceding estimates we have the required inequality. Hence the proof of Theorem 2 is completed.
Theorem 3
Suppose that $F: \Delta \to \mathbb{R}$ is an h-convex function on the co-ordinates on $\Delta$. Then one has the inequalities (3.5).
Proof. Since $F$ is h-convex on the co-ordinates, each partial mapping obtained by fixing one variable is h-convex in the other variable. Applying the Hermite–Hadamard-Mercer inequality for h-convex functions to one partial mapping and integrating the resulting inequality with respect to the remaining variable, we get (3.6); similarly, for the other partial mapping, we get (3.7). By adding the inequalities (3.6) and (3.7), we obtain the middle estimate. Now, by the Hadamard-Mercer inequality for h-convex functions, we obtain further one-dimensional inequalities; adding these and dividing both sides by the appropriate constant, we get the next estimate. Finally, by the inequality (3.8) and adding the above inequalities, we get the remaining bound. From (3.6)-(3.10), and by using the Jensen-Mercer inequality for co-ordinated h-convex functions, we get (3.5). The proof of Theorem 3 is completed.
Corollary 1
If we replace
in Theorem 3, we get the inequalities:
Remark 1
If we replace
in Theorem 3, we get the inequalities,
which are already proved by Toseef, et al.
Remark 2
If we replace
, and
and
in Theorem 3, we get the inequalities,
which are already proved by Dragomir in [9].
Remark 3
If we replace
, and
and
in Theorem 3, we get the inequality:
Remark 4
If we replace
and
in Theorem 3, we get the inequalities,
which are already proved by Alomari and Latif in [21].
4. Numerical examples and computational analysis
In this section, we give numerical examples and computational analysis of newly derived inequalities.
Example 2
The given function is h-convex on the co-ordinates, and there are three cases.
In the first case:
,
(4.1)
(4.3)
In the second case:
(4.3)
(4.5)
In the third case:
From inequalities (4.1)-(4.6), Table 1, and Figure 1, we can conclude that the inequalities of Theorem 3 are better for certain values of the parameters and worse for the others.
Table 1: Comparative analysis of inequalities of Theorem 3 for different values of 's' of Example 2.
s | Left Inequality | Middle Inequality | Right Inequality
0.1 | 0.5800 | 2.0352 | 16.0273
0.2 | 1.3831 | 4.1549 | 24.3931
0.3 | 3.2534 | 8.5082 | 41.4191
0.4 | 7.6526 | 17.4753 | 76.3889
0.5 | 18.000 | 36.000 | 149.333
0.6 | 42.3388 | 74.3783 | 303.732
0.7 | 99.5877 | 154.111 | 634.494
0.8 | 234.246 | 320.212 | 1349.81
0.9 | 550.983 | 667.166 | 2908.3
1.0 | 1296.0 | 1393.778 | 6324.0
Example 3
The given function is h-convex on the co-ordinates, and there are three cases.
In the first case:
,
(4.7)
(4.9)
In the second case:
(4.10)
(4.12)
In the third case:
From inequalities (4.7)-(4.12), we can conclude that the inequalities of Theorem 3 are better for certain values of the parameters and worse for the others.
Remark 5
Clearly, in Example 2, from Table 1 and Figure 1, our newly established inequalities give better results when 's' lies between 0 and 1.
Figure 1: Comparative Analysis of Inequalities of Theorem 3 when 's' lies between 0 and 1.
Remark 6
Clearly, in Example 3, from Table 2 and Figure 2, our newly established inequalities give better results when 's' lies between 0 and 1.
Table 2: Comparative analysis of inequalities of Theorem 3 for different values of 's' of Example 3.
s | Left Inequality | Middle Inequality | Right Inequality
0.1 | 38.5646 | 269.08 | 72742
0.2 | 594892 | 223657 | 1.34767 × 10^9
0.3 | 917671 | 3.20339 × 10^8 | 2.5293 × 10^13
0.4 | 1.41558 × 10^8 | 5.8682 × 10^11 | 4.8038 × 10^17
0.5 | 2.1837 × 10^10 | 1.2290 × 10^15 | 9.21733 × 10^21
0.6 | 3.3685 × 10^12 | 2.7992 × 10^18 | 1.7844 × 10^26
0.7 | 5.1962 × 10^14 | 6.7541 × 10^21 | 3.4861 × 10^30
0.8 | 8.0155 × 10^16 | 1.6997 × 10^25 | 6.8403 × 10^34
0.9 | 1.2365 × 10^19 | 4.4163 × 10^28 | 1.3523 × 10^39
1.0 | 1.1524 × 10^21 | 5.3422 × 10^31 | 9.9887 × 10^42
Figure 2: Comparative Analysis of Inequalities of Theorem 3 when 's' lies between 0 and 1.
5. Applications
Hermite-Hadamard–Mercer inequality has several applications in mathematical analysis and optimization. Here are some potential areas of application:
1. Mathematical analysis
Hermite-Hadamard–Mercer inequality provides tighter bounds and more precise estimates, enhancing theoretical understanding and practical applications. General convex functions offer insights into broader applications where traditional convexity concepts are insufficient.
2. Optimization
Improved inequalities can enhance the performance and accuracy of optimization algorithms, especially those that rely on convexity assumptions. In operations research, refined inequalities can lead to more efficient solutions for resource allocation problems by providing better bounds and estimates.
3. Computational analysis
Computational analysis of these inequalities can help in developing numerical methods that are more efficient and accurate. Refined inequalities can improve the fidelity of simulations and models that involve convex functions or require precise bounds for their operations.
4. Economics and finance
In financial mathematics, tighter inequalities can improve risk assessments and pricing models by providing more accurate estimates. Enhanced convex function analysis can refine economic models, leading to better predictions and insights.
5. Engineering
In control theory, refined inequalities can lead to more precise control algorithms, improving system stability and performance. Better bounds and estimates can improve signal processing techniques, leading to clearer and more accurate results.
6. Conclusion
In this article, we demonstrated the Jensen-Mercer inequality for coordinated h-convex functions and introduced the novel Hermite–Hadamard-Mercer type inequalities tailored for coordinated h-convex functions, leveraging a newly discovered inequality. We provided numerical examples and conducted computational analyses of the derived inequalities, showcasing their superior estimation capabilities compared to previously established results. This work significantly contributes to the evolution of mathematical theory and its real-world applications by effectively connecting theoretical concepts with practical problem-solving methodologies. It represents a fresh trajectory in the realm of inequalities, offering valuable insights for researchers immersed in this domain.
Authors' contributions
Conceptualization, methodology, validation, investigation, and writing (original draft preparation): M. Toseef. Writing (review and editing): M. Toseef, A. Mateen, H. Budak, A. Kashuri. Visualization: M. Toseef, A. Mateen, H. Budak, A. Kashuri. Supervision: Z. Zhang. Project administration: Z. Zhang and H. Budak. All authors have read and agreed to the final version of the manuscript.