Dataset Viewer (auto-converted to Parquet)

Columns: title (string, lengths 1–124), text (string, lengths 9–3.55k), source (string, 1 class)
design management
In market- and brand-focused companies, design management focuses mainly on brand design management, including corporate brand management and product brand management. Focusing on the brand as the core for design decisions results in a strong focus on the brand experience, customer touch points, reliability, recognition, and trust relations. The design is driven by the brand vision and strategy.

Corporate brand design management: Market- and brand-focused organizations are concerned with the expression and perception of the corporate brand.
wikipedia
design management
Corporate design management implements, develops, and maintains the corporate identity, or brand. This type of brand management is strongly anchored in the organization to control and influence corporate design activities. The design program plays the role of a quality program within many fields of the organization to achieve uniform internal branding.
wikipedia
design management
It is strongly linked to strategy, corporate culture, product development, marketing, organizational structure, and technological development. Achieving a consistent corporate brand requires the involvement of designers and a widespread design awareness among employees. A creative culture, knowledge sharing processes, determination, design leadership, and good work relations support the work of corporate brand management.
wikipedia
design management
Product brand design management: The main focus of product brand management lies on the single product or product family. Product design management is linked to research and development, marketing, and brand management, and is present in the fast-moving consumer goods (FMCG) industry. It is responsible for the visual expressions of the individual product brand, with its diverse customer–brand touch points and the execution of the brand through design.
wikipedia
marketing engineering
In marketing engineering, methods and models can be classified into several categories:
wikipedia
observational techniques
In marketing research, the most frequently used types of observational techniques are:

Personal observation
- observing products in use to detect usage patterns and problems
- observing license plates in store parking lots
- determining the socio-economic status of shoppers
- determining the level of package scrutiny
- determining the time it takes to make a purchase decision

Mechanical observation
- eye-tracking analysis while subjects watch advertisements
- oculometers – what the subject is looking at
- pupilometers – how interested is the viewer
- electronic checkout scanners – record purchase behaviour
- on-site cameras in stores
- people meters (as in monitoring television viewing), e.g. the Nielsen box
- voice pitch meters – measure emotional reactions
- psychogalvanometer – measures galvanic skin response

Audits
- retail audits to determine the quality of service in stores
- inventory audits to determine product acceptance
- shelf space audits
- scanner-based audits

Trace analysis
- credit card records
- computer cookie records
- garbology – looking for traces of purchase patterns in garbage
- detecting store traffic patterns by observing the wear in the floor (long term) or the dirt on the floor (short term)
- exposure to advertisements

Content analysis
- observing the content of magazines, television broadcasts, radio broadcasts, or newspapers, whether articles, programs, or advertisements
wikipedia
product model
In marketing, a product is an object, system, or service made available for consumer use in response to consumer demand; it is anything that can be offered to a market to satisfy the desire or need of a customer. In retailing, products are often referred to as merchandise, and in manufacturing, products are bought as raw materials and then sold as finished goods. A service is also regarded as a type of product.
wikipedia
product model
In project management, products are the formal definition of the project deliverables that make up or contribute to delivering the objectives of the project. A related concept is that of a sub-product, a secondary but useful result of a production process. Dangerous products, particularly physical ones, that cause injuries to consumers or bystanders may be subject to product liability.
wikipedia
brand development
In marketing, brand management begins with an analysis of how a brand is currently perceived in the market, proceeds to planning how the brand should be perceived if it is to achieve its objectives, and continues with ensuring that the brand is perceived as planned and secures its objectives. Developing a good relationship with target markets is essential for brand management. Tangible elements of brand management include the product itself: its look, price, packaging, etc. The intangible elements are the experiences that the target markets share with the brand, and also the relationships they have with the brand. A brand manager would oversee all aspects of the consumer's brand association as well as relationships with members of the supply chain.
wikipedia
linear discriminant analysis
In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps: Formulate the problem and gather data. Identify the salient attributes consumers use to evaluate products in this category, then use quantitative marketing research techniques (such as surveys) to collect data from a sample of potential customers concerning their ratings of all the product attributes. The data collection stage is usually done by marketing research professionals.
wikipedia
linear discriminant analysis
Survey questions ask the respondent to rate a product from 1 to 5 (or 1 to 7, or 1 to 10) on a range of attributes chosen by the researcher. Anywhere from five to twenty attributes are chosen. They could include things like: ease of use, weight, accuracy, durability, colourfulness, price, or size.
wikipedia
linear discriminant analysis
The attributes chosen will vary depending on the product being studied. The same question is asked about all the products in the study. The data for multiple products is codified and input into a statistical program such as R, SPSS or SAS.
wikipedia
linear discriminant analysis
(This step is the same as in factor analysis.) Estimate the discriminant function coefficients and determine their statistical significance and validity. Choose the appropriate discriminant analysis method: the direct method involves estimating the discriminant function so that all the predictors are assessed simultaneously.
wikipedia
linear discriminant analysis
The stepwise method enters the predictors sequentially. The two-group method should be used when the dependent variable has two categories or states. The multiple discriminant method is used when the dependent variable has three or more categorical states.
wikipedia
linear discriminant analysis
Use Wilks's lambda to test for significance in SPSS, or the F statistic in SAS. The most common method used to test validity is to split the sample into an estimation or analysis sample and a validation or holdout sample. The estimation sample is used in constructing the discriminant function.
wikipedia
linear discriminant analysis
The validation sample is used to construct a classification matrix, which contains the numbers of correctly and incorrectly classified cases. The percentage of correctly classified cases is called the hit ratio. Plot the results on a two-dimensional map, define the dimensions, and interpret the results.
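To make the estimation/holdout procedure concrete, here is a minimal sketch using scikit-learn's LinearDiscriminantAnalysis in Python. The survey ratings, segment labels, attribute count, and split proportion are hypothetical illustrations, not data from the text.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Hypothetical ratings (1-7 scale) on five attributes for 200 respondents,
# each labelled with one of two customer segments.
X = rng.integers(1, 8, size=(200, 5)).astype(float)
y = (X[:, 0] + X[:, 3] + rng.normal(0, 1, 200) > 8).astype(int)

# Split into an estimation (analysis) sample and a validation (holdout) sample.
X_est, X_val, y_est, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_est, y_est)

# Classification matrix on the holdout sample; the hit ratio is the
# percentage of correctly classified cases.
cm = confusion_matrix(y_val, lda.predict(X_val))
hit_ratio = np.trace(cm) / cm.sum()
print(cm)
print(f"Hit ratio: {hit_ratio:.1%}")
```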
wikipedia
linear discriminant analysis
The statistical program (or a related module) will map the results. The map will plot each product (usually in two-dimensional space).
wikipedia
linear discriminant analysis
The distances of products from each other indicate how different they are. The dimensions must be labelled by the researcher. This requires subjective judgement and is often very challenging. See perceptual mapping.
wikipedia
bundled software
In marketing, product bundling is offering several products or services for sale as one combined product or service package. It is a common feature in many imperfectly competitive product and service markets. Industries engaged in the practice include telecommunications services, financial services, health care, information, and consumer electronics.
wikipedia
bundled software
A software bundle might include a word processor, spreadsheet, and presentation program into a single office suite. The cable television industry often bundles many TV and movie channels into a single tier or package. The fast food industry combines separate food items into a "meal deal" or "value meal".
wikipedia
bundled software
A bundle of products may be called a package deal; in recorded music or video games, a compilation or box set; or in publishing, an anthology. Most firms are multi-product or multi-service companies faced with the decision whether to sell products or services separately at individual prices or whether combinations of products should be marketed in the form of "bundles" for which a "bundle price" is asked. Price bundling plays an increasingly important role in many industries (e.g. banking, insurance, software, automotive) and some companies even build their business strategies on bundling. In bundle pricing, companies sell a package or set of goods or services for a lower price than they would charge if the customer bought all of them separately. Pursuing a bundle pricing strategy allows a business to increase its profit by using a discount to induce customers to buy more than they otherwise would have.
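As a toy numerical illustration of the last point (the customers, goods, and reservation prices below are invented for the example, not taken from the text): with two customers whose valuations for two goods are negatively correlated, a single bundle price can extract more revenue than any pair of separate prices.

```python
from itertools import product

reservation = {            # customer -> (value of good 1, value of good 2)
    "A": (9.0, 3.0),
    "B": (3.0, 9.0),
}

def revenue_separate(p1, p2):
    # Each customer buys a good iff its price is at most their valuation.
    return sum((p1 if v1 >= p1 else 0) + (p2 if v2 >= p2 else 0)
               for v1, v2 in reservation.values())

def revenue_bundle(pb):
    # A customer buys the bundle iff the sum of their valuations covers it.
    return sum(pb for v1, v2 in reservation.values() if v1 + v2 >= pb)

candidates = [3.0, 9.0]
best_sep = max(revenue_separate(p1, p2) for p1, p2 in product(candidates, repeat=2))
best_bun = revenue_bundle(12.0)
print(best_sep, best_bun)   # 18.0 vs 24.0: the bundle price of 12 wins
```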
wikipedia
émery topology
In martingale theory, the Émery topology is a topology on the space of semimartingales. The topology is used in financial mathematics. The class of stochastic integrals with general predictable integrands coincides with the closure of the set of all simple integrals. The topology was introduced in 1979 by the French mathematician Michel Émery.
wikipedia
computer media
In mass communication, digital media is any communication media that operate in conjunction with various encoded machine-readable data formats. Digital content can be created, viewed, distributed, modified, listened to, and preserved on a digital electronics device, including digital data storage media (in contrast to analog electronic media) and digital broadcasting. Digital refers to any data represented by a series of digits, and media refers to methods of broadcasting or communicating this information. Together, digital media refers to mediums of digitized information broadcast through a screen and/or a speaker. This also includes text, audio, video, and graphics that are transmitted over the internet for viewing or listening. Digital media platforms, such as YouTube, Vimeo, and Twitch, accounted for viewership rates of 27.9 billion hours in 2020. A contributing factor to its part in what is commonly referred to as the digital revolution can be attributed to the use of interconnectivity.
wikipedia
proteinogenic amino acids
In mass spectrometry of peptides and proteins, knowledge of the masses of the residues is useful. The mass of the peptide or protein is the sum of the residue masses plus the mass of water (monoisotopic mass = 18.01056 Da; average mass = 18.0153 Da). The residue masses are calculated from the tabulated chemical formulas and atomic weights. In mass spectrometry, ions may also include one or more protons (monoisotopic mass = 1.00728 Da; average mass* = 1.0074 Da). *Strictly speaking, a proton cannot have an average mass: quoting one implicitly treats the deuteron as an isotope of the proton, whereas it should be regarded as a different species (see Hydron (chemistry)).
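As a small illustration of the mass arithmetic above, here is a sketch in Python; the residue masses for the few amino acids included are standard monoisotopic values (assumed accurate to the digits shown), the water and proton masses are taken from the text, and the helper function is hypothetical.

```python
MONO_RESIDUE = {   # Da, monoisotopic residue (not free amino acid) masses
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "L": 113.08406, "K": 128.09496,
}
WATER = 18.01056   # Da, mass of water added to the summed residues
PROTON = 1.00728   # Da, monoisotopic proton mass

def peptide_mz(sequence: str, charge: int = 1) -> float:
    """Monoisotopic m/z of [M + zH]^(z+) for a peptide sequence."""
    neutral = sum(MONO_RESIDUE[aa] for aa in sequence) + WATER
    return (neutral + charge * PROTON) / charge

print(peptide_mz("GASP", charge=1))   # ~331.16 Da
```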
wikipedia
direct analysis in real time
In mass spectrometry, direct analysis in real time (DART) is an ion source that produces electronically or vibronically excited-state species from gases such as helium, argon, or nitrogen that ionize atmospheric molecules or dopant molecules. The ions generated from atmospheric or dopant molecules undergo ion-molecule reactions with the sample molecules to produce analyte ions. Analytes with low ionization energy may be ionized directly. The DART ionization process can produce positive or negative ions depending on the potential applied to the exit electrode.
wikipedia
direct analysis in real time
This ionization can occur for species desorbed directly from surfaces such as bank notes, tablets, bodily fluids (blood, saliva and urine), polymers, glass, plant leaves, fruits and vegetables, clothing, and living organisms. DART is applied for rapid analysis of a wide variety of samples at atmospheric pressure and in the open laboratory environment. It does not require specific sample preparation, so it can be used for the analysis of solid, liquid and gaseous samples in their native state. With the aid of DART, exact mass measurements can be done rapidly with high-resolution mass spectrometers. DART mass spectrometry has been used in pharmaceutical applications, forensic studies, quality control, and environmental studies.
wikipedia
fragmentation pattern
In mass spectrometry, fragmentation is the dissociation of energetically unstable molecular ions formed when molecules pass through the ionization chamber of a mass spectrometer. These reactions have been well documented over the decades, and fragmentation patterns are useful for determining the molecular weight and structural information of unknown molecules. Fragmentation that occurs in tandem mass spectrometry experiments has been a recent focus of research, because this data helps facilitate the identification of molecules.
wikipedia
single-cell analysis
In mass spectrometry-based proteomics there are three major steps needed for peptide identification: sample preparation, separation of peptides, and identification of peptides. Several groups have focused on oocytes or very early cleavage-stage cells, since these cells are unusually large and provide enough material for analysis. Another approach, single-cell proteomics by mass spectrometry (SCoPE-MS), has quantified thousands of proteins in mammalian cells with typical cell sizes (diameter of 10-15 μm) by combining carrier cells and single-cell barcoding. The second generation, SCoPE2, increased the throughput by automated and miniaturized sample preparation; it also improved quantitative reliability and proteome coverage by data-driven optimization of LC-MS/MS and peptide identification.
wikipedia
single-cell analysis
The sensitivity and consistency of these methods have been further improved by prioritization and by massively parallel sample preparation in nanoliter-size droplets. Another direction for single-cell protein analysis is based on a scalable framework of multiplexed data-independent acquisition (plexDIA), which saves time through parallel analysis of both peptide ions and protein samples, thereby realizing multiplicative gains in throughput. The separation of differently sized proteins can be accomplished by using capillary electrophoresis (CE) or liquid chromatography (LC); using liquid chromatography with mass spectrometry is also known as LC-MS.
wikipedia
single-cell analysis
This step gives order to the peptides before quantification using tandem mass spectrometry (MS/MS). The major difference between quantification methods is that some use labels on the peptides, such as tandem mass tags (TMT) or dimethyl labels, to identify which cell a certain protein came from (proteins coming from each cell have a different label), while others use no labels (cells are quantified individually). The mass spectrometry data are then analyzed by running them through databases that convert the information about identified peptides into quantification of protein levels. These methods are very similar to those used to quantify the proteome of bulk cells, with modifications to accommodate the very small sample volume.
wikipedia
golden record (informatics)
In master data management (MDM), the golden copy refers to the master data (master version) of the reference data, which serves as the authoritative source of "truth" for all applications in a given IT landscape.
wikipedia
glutaraldehyde
In materials science, glutaraldehyde's application areas range from polymers to metals and biomaterials. Glutaraldehyde is commonly used as a fixing agent before the characterization of biomaterials by microscopy. It is a powerful crosslinking agent for many polymers containing primary amine groups, and it can also be used as an interlinking agent to improve the adhesion force between two polymeric coatings. Glutaraldehyde is also used to protect undersea pipes against corrosion.
wikipedia
resilience (materials science)
In materials science, resilience is the ability of a material to absorb energy when it is deformed elastically, and to release that energy upon unloading. Proof resilience is defined as the maximum energy that can be absorbed up to the elastic limit, without creating a permanent distortion. The modulus of resilience is defined as the maximum energy that can be absorbed per unit volume without creating a permanent distortion.
wikipedia
resilience (materials science)
It can be calculated by integrating the stress–strain curve from zero to the elastic limit. In uniaxial tension, under the assumptions of linear elasticity,

$U_r = \frac{\sigma_y^2}{2E} = \frac{\sigma_y \varepsilon_y}{2}$

where $U_r$ is the modulus of resilience, $\sigma_y$ is the yield strength, $\varepsilon_y$ is the yield strain, and $E$ is the Young's modulus. This analysis is not valid for non-linear elastic materials like rubber, for which the area under the curve up to the elastic limit must be used.
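A quick numerical illustration of the formula; the yield strength and modulus below are typical textbook values for a mild steel, assumed for the example.

```python
# Modulus of resilience for a linearly elastic material, U_r = sigma_y^2 / (2E).
sigma_y = 250e6      # yield strength, Pa (assumed)
E = 200e9            # Young's modulus, Pa (assumed)

U_r = sigma_y**2 / (2 * E)   # J/m^3, elastic energy absorbed per unit volume
print(f"Modulus of resilience: {U_r:.0f} J/m^3")  # ~156250 J/m^3
```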
wikipedia
abc analysis
In materials management, ABC analysis is an inventory categorisation technique. ABC analysis divides an inventory into three categories—"A items" with very tight control and accurate records, "B items" with less tightly controlled and good records, and "C items" with the simplest controls possible and minimal records. The ABC analysis provides a mechanism for identifying items that will have a significant impact on overall inventory cost, while also providing a mechanism for identifying different categories of stock that will require different management and controls. The ABC analysis suggests that inventories of an organization are not of equal value.
wikipedia
abc analysis
Thus, the inventory is grouped into three categories (A, B, and C) in order of their estimated importance. 'A' items are very important for an organization.
wikipedia
abc analysis
Because of the high value of these 'A' items, frequent value analysis is required. In addition to that, an organization needs to choose an appropriate order pattern (e.g. 'just-in-time') to avoid excess capacity. 'B' items are important, but of course less important than 'A' items and more important than 'C' items. Therefore, 'B' items are intergroup items. 'C' items are marginally important.
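A minimal sketch of how such a categorisation might be computed in Python; the items, values, and the 80%/95% cumulative-value cutoffs are illustrative assumptions, since the text does not fix exact thresholds.

```python
# ABC categorisation by annual consumption value, Pareto-style.
items = {               # item -> annual consumption value (hypothetical)
    "widget": 90_000, "gear": 45_000, "bolt": 12_000,
    "washer": 4_000, "label": 1_500, "shim": 500,
}

total = sum(items.values())
cum = 0.0
categories = {}
# Walk items from highest to lowest value, tracking the cumulative share.
for name, value in sorted(items.items(), key=lambda kv: kv[1], reverse=True):
    cum += value
    share = cum / total
    categories[name] = "A" if share <= 0.80 else ("B" if share <= 0.95 else "C")

print(categories)
```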
wikipedia
functionally graded material
In materials science, functionally graded materials (FGMs) may be characterized by a gradual variation in composition and structure over their volume, resulting in corresponding changes in the properties of the material. The materials can be designed for specific functions and applications. Various approaches based on bulk (particulate) processing, preform processing, layer processing, and melt processing are used to fabricate functionally graded materials.
wikipedia
yield (engineering)
In materials science and engineering, the yield point is the point on a stress-strain curve that indicates the limit of elastic behavior and the beginning of plastic behavior. Below the yield point, a material will deform elastically and will return to its original shape when the applied stress is removed. Once the yield point is passed, some fraction of the deformation will be permanent and non-reversible and is known as plastic deformation. The yield strength or yield stress is a material property and is the stress corresponding to the yield point at which the material begins to deform plastically.
wikipedia
yield (engineering)
The yield strength is often used to determine the maximum allowable load in a mechanical component, since it represents the upper limit to forces that can be applied without producing permanent deformation. In some materials, such as aluminium, there is a gradual onset of non-linear behavior, and no precise yield point. In such a case, the offset yield point (or proof stress) is taken as the stress at which 0.2% plastic deformation occurs.
wikipedia
yield (engineering)
Yielding is a gradual failure mode which is normally not catastrophic, unlike ultimate failure. In solid mechanics, the yield point can be specified in terms of the three-dimensional principal stresses ($\sigma_1, \sigma_2, \sigma_3$) with a yield surface or a yield criterion. A variety of yield criteria have been developed for different materials.
wikipedia
functionally graded element
In materials science and mathematics, functionally graded elements are elements used in finite element analysis. They can be used to describe a functionally graded material.
wikipedia
work hardened
In materials science parlance, dislocations are defined as line defects in a material's crystal structure. The bonds surrounding the dislocation are already elastically strained by the defect compared to the bonds between the constituents of the regular crystal lattice. Therefore, these bonds break at relatively lower stresses, leading to plastic deformation. The strained bonds around a dislocation are characterized by lattice strain fields.
wikipedia
work hardened
For example, there are compressively strained bonds directly next to an edge dislocation and tensilely strained bonds beyond the end of an edge dislocation. These form compressive strain fields and tensile strain fields, respectively. Strain fields are analogous to electric fields in certain ways.
wikipedia
work hardened
Specifically, the strain fields of dislocations obey similar laws of attraction and repulsion; in order to reduce overall strain, compressive strains are attracted to tensile strains, and vice versa. The visible (macroscopic) results of plastic deformation are the result of microscopic dislocation motion. For example, the stretching of a steel rod in a tensile tester is accommodated through dislocation motion on the atomic scale.
wikipedia
sandwich structured composite
In materials science, a sandwich-structured composite is a special class of composite materials that is fabricated by attaching two thin-but-stiff skins to a lightweight but thick core. The core material is normally low strength, but its higher thickness provides the sandwich composite with high bending stiffness with overall low density. Open- and closed-cell-structured foams like polyethersulfone, polyvinylchloride, polyurethane, polyethylene or polystyrene foams, balsa wood, syntactic foams, and honeycombs are commonly used core materials.
wikipedia
sandwich structured composite
Sometimes, the honeycomb structure is filled with other foams for added strength. Open- and closed-cell metal foam can also be used as core material. Laminates of glass- or carbon-fiber-reinforced thermoplastics or mainly thermoset polymers (unsaturated polyesters, epoxies...) are widely used as skin materials. Sheet metal is also used as skin material in some cases. The core is bonded to the skins with an adhesive or, in the case of metal components, by brazing.
wikipedia
advanced composite materials (engineering)
In materials science, advanced composite materials (ACMs) are materials that are generally characterized by unusually high-strength fibers with unusually high stiffness, or modulus of elasticity, compared to other materials, bound together by weaker matrices. These are termed "advanced composite materials" in comparison to the composite materials commonly in use, such as reinforced concrete, or even concrete itself. The high-strength fibers are also of low density while occupying a large fraction of the volume.
wikipedia
advanced composite materials (engineering)
Advanced composites exhibit desirable physical and chemical properties that include light weight coupled with high stiffness (elasticity), and strength along the direction of the reinforcing fiber, dimensional stability, temperature and chemical resistance, flex performance, and relatively easy processing. Advanced composites are replacing metal components in many uses, particularly in the aerospace industry. Composites are classified according to their matrix phase.
wikipedia
advanced composite materials (engineering)
These classifications are polymer matrix composites (PMCs), ceramic matrix composites (CMCs), and metal matrix composites (MMCs). Also, materials within these categories are often called "advanced" if they combine the properties of high (axial, longitudinal) strength values and high (axial, longitudinal) stiffness values, with low weight, corrosion resistance, and in some cases special electrical properties. Advanced composite materials have broad, proven applications, in the aircraft, aerospace, and sports equipment sectors.
wikipedia
advanced composite materials (engineering)
Even more specifically, ACMs are very attractive for aircraft and aerospace structural parts. ACMs have been developed for NASA's Advanced Space Transportation Program, for armor protection in Army aviation and for the Federal Aviation Administration of the USA, and for high-temperature shafting for the Comanche helicopter. Additionally, ACMs have a decades-long history in military and government aerospace industries. However, much of the technology is new and not presented formally in secondary or undergraduate education, and the technology of advanced composites manufacture is continually evolving.
wikipedia
intrinsic properties
In materials science, an intrinsic property is independent of how much of a material is present and is independent of the form of the material, e.g., one large piece or a collection of small particles. Intrinsic properties are dependent mainly on the fundamental chemical composition and structure of the material. Extrinsic properties are differentiated as being dependent on the presence of avoidable chemical contaminants or structural defects. In biology, intrinsic effects originate from inside an organism or cell, such as an autoimmune disease or intrinsic immunity. In electronics and optics, intrinsic properties of devices (or systems of devices) are generally those that are free from the influence of various types of non-essential defects. Such defects may arise as a consequence of design imperfections, manufacturing errors, or operational extremes and can produce distinctive and often undesirable extrinsic properties. The identification, optimization, and control of both intrinsic and extrinsic properties are among the engineering tasks necessary to achieve the high performance and reliability of modern electrical and optical systems.
wikipedia
asperity (material science)
In materials science, asperity, defined as "unevenness of surface, roughness, ruggedness" (from the Latin asper, "rough"), has implications (for example) in physics and seismology. Smooth surfaces, even those polished to a mirror finish, are not truly smooth on a microscopic scale. They are rough, with sharp, rough or rugged projections, termed "asperities". Surface asperities exist across multiple scales, often in a self-affine or fractal geometry.
wikipedia
asperity (material science)
The fractal dimension of these structures has been correlated with the contact mechanics exhibited at an interface in terms of friction and contact stiffness. When two macroscopically smooth surfaces come into contact, initially they only touch at a few of these asperity points. These cover only a very small portion of the surface area.
wikipedia
asperity (material science)
Friction and wear originate at these points, and thus understanding their behavior becomes important when studying materials in contact. When the surfaces are subjected to a compressive load, the asperities deform through elastic and plastic modes, increasing the contact area between the two surfaces until the contact area is sufficient to support the load. The relationship between frictional interactions and asperity geometry is complex and poorly understood. It has been reported that an increased roughness may under certain circumstances result in weaker frictional interactions, while smoother surfaces may in fact exhibit high levels of friction owing to high levels of true contact. The Archard equation provides a simplified model of asperity deformation when materials in contact are subject to a force. Due to the ubiquitous presence of deformable asperities in self-affine hierarchical structures, the true contact area at an interface exhibits a linear relationship with the applied normal load.
wikipedia
dispersion (materials science)
In materials science, dispersion is the fraction of atoms of a material exposed to the surface. In general, D = NS/NT, where D is the dispersion, NS is the number of surface atoms, and NT is the total number of atoms of the material. It is an important concept in heterogeneous catalysis, since only atoms exposed to the surface can affect catalytic surface reactions. Dispersion increases with decreasing crystallite size and approaches unity at a crystallite diameter of about 0.1 nm.
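A toy sketch of the definition for an idealized cubic crystallite; the simple-cubic geometry and the surface-atom counting convention are simplifying assumptions for illustration.

```python
# Dispersion D = NS/NT for a cube of n^3 atoms on a simple cubic lattice:
# surface atoms are those on the outer shell, interior atoms form an
# (n-2)^3 inner cube.
def dispersion(n: int) -> float:
    total = n**3
    interior = max(n - 2, 0) ** 3
    surface = total - interior
    return surface / total

for n in (2, 5, 10, 50):
    print(n, round(dispersion(n), 3))
# Dispersion rises toward 1 as the crystallite shrinks, as stated above.
```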
wikipedia
lamellar structure
In materials science, lamellar structures or microstructures are composed of fine, alternating layers of different materials in the form of lamellae. They are often observed in cases where a phase transition front moves quickly, leaving behind two solid products, as in rapid cooling of eutectic (such as solder) or eutectoid (such as pearlite) systems. Such conditions force phases of different composition to form but allow little time for diffusion to produce those phases' equilibrium compositions. Fine lamellae solve this problem by shortening the diffusion distance between phases, but their high surface energy makes them unstable and prone to break up when annealing allows diffusion to progress.
wikipedia
lamellar structure
A deeper eutectic or more rapid cooling will result in finer lamellae; as the size of an individual lamella approaches zero, the system will instead retain its high-temperature structure. Two common cases of this include cooling a liquid to form an amorphous solid, and cooling eutectoid austenite to form martensite. In biology, normal adult bones possess a lamellar structure which may be disrupted by some diseases.
wikipedia
material failure theory
In materials science, material failure is the loss of load-carrying capacity of a material unit. This definition implies that material failure can be examined at different scales, from microscopic to macroscopic. In structural problems, where the structural response may be beyond the initiation of nonlinear material behaviour, material failure is of profound importance for the determination of the integrity of the structure. On the other hand, due to the lack of globally accepted fracture criteria, the determination of the structure's damage due to material failure is still under intensive research.
wikipedia
permeance
In materials science, permeance is the degree to which a material transmits another substance.
wikipedia
quenching
In materials science, quenching is the rapid cooling of a workpiece in water, oil, polymer, air, or other fluids to obtain certain material properties. A type of heat treating, quenching prevents undesired low-temperature processes, such as phase transformations, from occurring. It does this by reducing the window of time during which these undesired reactions are both thermodynamically favorable, and kinetically accessible; for instance, quenching can reduce the crystal grain size of both metallic and plastic materials, increasing their hardness. In metallurgy, quenching is most commonly used to harden steel by inducing a martensite transformation, where the steel must be rapidly cooled through its eutectoid point, the temperature at which austenite becomes unstable.
wikipedia
quenching
In steel alloyed with metals such as nickel and manganese, the eutectoid temperature becomes much lower, but the kinetic barriers to phase transformation remain the same. This allows quenching to start at a lower temperature, making the process much easier. High-speed steel also has added tungsten, which serves to raise kinetic barriers, which among other effects gives material properties (hardness and abrasion resistance) as though the workpiece had been cooled more rapidly than it really has. Even cooling such alloys slowly in air has most of the desired effects of quenching; high-speed steel weakens much less from heat cycling due to high-speed cutting. Extremely rapid cooling can prevent the formation of all crystal structures, resulting in amorphous metal or "metallic glass".
wikipedia
slip (materials science)
In materials science, slip is the large displacement of one part of a crystal relative to another part along crystallographic planes and directions. Slip occurs by the passage of dislocations on close-packed planes, which are planes containing the greatest number of atoms per area, and in close-packed directions (most atoms per length). Close-packed planes are known as slip or glide planes.
wikipedia
slip (materials science)
A slip system describes the set of symmetrically identical slip planes and associated family of slip directions for which dislocation motion can easily occur and lead to plastic deformation. The magnitude and direction of slip are represented by the Burgers vector, b. An external force makes parts of the crystal lattice glide along each other, changing the material's geometry. A critical resolved shear stress is required to initiate slip.
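The critical resolved shear stress is usually related to an applied uniaxial stress through Schmid's law, τ = σ cos φ cos λ; this relation is standard in the field though not spelled out in the excerpt. A minimal sketch, with assumed angles and CRSS:

```python
import math

def resolved_shear_stress(sigma: float, phi_deg: float, lam_deg: float) -> float:
    """Schmid's law: sigma is the applied tensile stress, phi the angle between
    the load axis and the slip-plane normal, lambda the angle between the load
    axis and the slip direction."""
    return sigma * math.cos(math.radians(phi_deg)) * math.cos(math.radians(lam_deg))

tau = resolved_shear_stress(sigma=50e6, phi_deg=45, lam_deg=45)  # Pa, assumed values
crss = 20e6                                                      # Pa, assumed CRSS
print(f"tau = {tau/1e6:.1f} MPa, slip {'initiates' if tau >= crss else 'does not initiate'}")
```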
wikipedia
zener–hollomon parameter
In materials science, the Zener–Hollomon parameter, typically denoted as Z, is used to relate changes in temperature or strain rate to the stress-strain behavior of a material. It has been most extensively applied to the forming of steels at increased temperature, when creep is active. It is given by

$Z = \dot{\varepsilon} \exp(Q/RT)$

where $\dot{\varepsilon}$ is the strain rate, Q is the activation energy, R is the gas constant, and T is the temperature. The Zener–Hollomon parameter is also known as the temperature-compensated strain rate, since the two are inversely proportional in the definition.
wikipedia
zener–hollomon parameter
It is named after Clarence Zener and John Herbert Hollomon, Jr. who established the formula based on the stress-strain behavior in steel.
wikipedia
zener–hollomon parameter
When plastically deforming a material, the flow stress depends heavily on both the strain rate and temperature. During forming processes, Z may help determine appropriate changes in strain rate or temperature when the other variable is altered, in order to keep material flowing properly. Z has also been applied to some metals over a large range of strain rates and temperatures and shown comparable microstructures at the end of processing, as long as Z remained similar. This is because the relative activity of various deformation mechanisms is typically inversely proportional to temperature or strain rate, such that decreasing the strain rate or increasing the temperature will decrease Z and promote plastic deformation.
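A short sketch evaluating Z from the definition; the strain rate, activation energy, and temperature are assumed values of the kind used in hot working of steels, not data from the text.

```python
import math

R = 8.314            # J/(mol*K), gas constant
eps_dot = 1.0        # 1/s, strain rate (assumed)
Q = 300e3            # J/mol, activation energy for deformation (assumed)
T = 1100.0           # K, forming temperature (assumed)

Z = eps_dot * math.exp(Q / (R * T))
print(f"Z = {Z:.3e} 1/s")
# Raising eps_dot or lowering T increases Z, consistent with the definition.
```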
wikipedia
threshold displacement energy
In materials science, the threshold displacement energy (Td) is the minimum kinetic energy that an atom in a solid needs to be permanently displaced from its site in the lattice to a defect position. It is also known as "displacement threshold energy" or just "displacement energy". In a crystal, a separate threshold displacement energy exists for each crystallographic direction.
wikipedia
threshold displacement energy
One should then distinguish between the minimum (Td,min) and the average (Td,ave) threshold displacement energies over all lattice directions. In amorphous solids, it may be possible to define an effective displacement energy to describe some other average quantity of interest. Threshold displacement energies in typical solids are of the order of 10–50 eV.
wikipedia
formative evaluation
In math education, it is important for teachers to see how their students approach problems, how much mathematical knowledge they use, and at what level they use it when solving problems. That is, knowing how students think in the process of learning or problem solving makes it possible for teachers to help their students overcome conceptual difficulties and, in turn, improve learning. In that sense, formative assessment is diagnostic. To employ formative assessment in the classroom, a teacher has to make sure that each student participates in the learning process by expressing their ideas; that there is a trustful environment in which students can provide each other with feedback; that the teacher provides students with feedback; and that the instruction is modified according to students' needs. In math classes, thought-revealing activities such as model-eliciting activities (MEAs) and generative activities provide good opportunities for covering these aspects of formative assessment.
wikipedia
sigma field
In mathematical analysis and in probability theory, a σ-algebra (also σ-field) on a set X is a nonempty collection Σ of subsets of X closed under complement, countable unions, and countable intersections. The ordered pair $(X, \Sigma)$ is called a measurable space. The σ-algebras are a subset of the set algebras; elements of the latter only need to be closed under the union or intersection of finitely many subsets, which is a weaker condition. The main use of σ-algebras is in the definition of measures; specifically, the collection of those subsets for which a given measure is defined is necessarily a σ-algebra. This concept is important in mathematical analysis as the foundation for Lebesgue integration, and in probability theory, where it is interpreted as the collection of events which can be assigned probabilities.
wikipedia
sigma field
Also, in probability, σ-algebras are pivotal in the definition of conditional expectation. In statistics, (sub-)σ-algebras are needed for the formal mathematical definition of a sufficient statistic, particularly when the statistic is a function or a random process and the notion of conditional density is not applicable. If $X = \{a, b, c, d\}$, one possible σ-algebra on $X$ is $\Sigma = \{\varnothing, \{a, b\}, \{c, d\}, \{a, b, c, d\}\}$, where $\varnothing$ is the empty set.
wikipedia
sigma field
In general, a finite algebra is always a σ-algebra. If $\{A_1, A_2, A_3, \ldots\}$ is a countable partition of $X$, then the collection of all unions of sets in the partition (including the empty set) is a σ-algebra. A more useful example is the set of subsets of the real line formed by starting with all open intervals and adding in all countable unions, countable intersections, and relative complements, and continuing this process (by transfinite iteration through all countable ordinals) until the relevant closure properties are achieved (a construction known as the Borel hierarchy).
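The partition example is easy to check computationally. A minimal sketch for a finite partition (finite case only, so "countable unions" reduce to finite unions; the sets are arbitrary illustrations):

```python
from itertools import combinations

X = {1, 2, 3, 4, 5, 6}
partition = [{1, 2}, {3}, {4, 5, 6}]

# The sigma-algebra generated by the partition: all unions of blocks,
# including the empty union.
sigma_algebra = set()
for r in range(len(partition) + 1):
    for blocks in combinations(partition, r):
        sigma_algebra.add(frozenset().union(*blocks))

# Closure checks: complements and pairwise unions stay inside the collection.
assert all(frozenset(X - s) in sigma_algebra for s in sigma_algebra)
assert all(a | b in sigma_algebra for a in sigma_algebra for b in sigma_algebra)
print(sorted(sorted(s) for s in sigma_algebra))  # 2**3 = 8 sets
```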
wikipedia
multi-variable function
In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of the complex-valued functions may be easily reduced to the study of the real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article. The domain of a function of n variables is the subset of $\mathbb{R}^n$ for which the function is defined. As usual, the domain of a function of several real variables is supposed to contain a nonempty open subset of $\mathbb{R}^n$.
wikipedia
bernstein's theorem (polynomials)
In mathematical analysis, Bernstein's inequality states that on the complex plane, within the disk of radius 1, the maximum modulus of the derivative of a polynomial of degree n is at most n times the maximum modulus of the polynomial itself. Applying the theorem k times gives

$\max_{|z| \le 1} \left| P^{(k)}(z) \right| \le \frac{n!}{(n-k)!} \cdot \max_{|z| \le 1} |P(z)|.$
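A quick numerical sanity check of the k = 1 case for an arbitrary sample polynomial, evaluating on the unit circle (where, by the maximum modulus principle, the maxima over the closed disk are attained):

```python
import numpy as np
from numpy.polynomial import Polynomial

P = Polynomial([1.0, -2.0, 0.0, 3.0])   # P(z) = 1 - 2z + 3z^3, degree n = 3
n = P.degree()

z = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 2000))  # points on |z| = 1
max_P = np.abs(P(z)).max()
max_dP = np.abs(P.deriv()(z)).max()

print(max_dP, n * max_P)
assert max_dP <= n * max_P + 1e-9       # Bernstein: max|P'| <= n * max|P|
```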
wikipedia
clairaut's equation
In mathematical analysis, Clairaut's equation (or the Clairaut equation) is a differential equation of the form

$y(x) = x \frac{dy}{dx} + f\!\left(\frac{dy}{dx}\right)$

where $f$ is continuously differentiable. It is a particular case of the Lagrange differential equation. It is named after the French mathematician Alexis Clairaut, who introduced it in 1734.
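As a standard worked example (not part of the excerpt): writing p = dy/dx and differentiating Clairaut's equation shows that every solution is either a member of a one-parameter family of lines or a singular envelope. For the sample choice f(p) = p²:

```latex
\[
y = xp + p^2, \qquad p = \frac{dy}{dx}.
\]
Differentiating in $x$: $p = p + (x + 2p)\,p'$, so $(x + 2p)\,p' = 0$.
\[
p' = 0 \;\Rightarrow\; y = Cx + C^2 \quad \text{(general solution: a family of lines)},
\]
\[
x + 2p = 0 \;\Rightarrow\; y = -\tfrac{x^2}{4} \quad \text{(singular solution: the envelope of the lines)}.
\]
```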
wikipedia
ivar ekeland
In mathematical analysis, Ekeland's variational principle, discovered by Ivar Ekeland, is a theorem asserting that there exists a nearly optimal solution to a class of optimization problems. Ekeland's variational principle can be used when the lower level set of a minimization problem is not compact, so that the Bolzano–Weierstrass theorem cannot be applied. The principle relies on the completeness of the metric space, and it leads to a quick proof of the Caristi fixed point theorem. Ekeland was associated with the University of Paris when he proposed this theorem.
wikipedia
fubini's theorem
In mathematical analysis, Fubini's theorem is a result that gives conditions under which it is possible to compute a double integral by using an iterated integral; it was introduced by Guido Fubini in 1907. One may switch the order of integration if the double integral yields a finite answer when the integrand is replaced by its absolute value. Fubini's theorem implies that the two iterated integrals are equal to the corresponding double integral.
wikipedia
fubini's theorem
Tonelli's theorem, introduced by Leonida Tonelli in 1909, is similar, but applies to a non-negative measurable function rather than one integrable over its domain. A related theorem is often called Fubini's theorem for infinite series, which states that if $\{a_{m,n}\}_{m=1,n=1}^{\infty}$ is a doubly indexed sequence of real numbers and $\sum_{(m,n) \in \mathbb{N} \times \mathbb{N}} a_{m,n}$ is absolutely convergent, then

$\sum_{(m,n) \in \mathbb{N} \times \mathbb{N}} a_{m,n} = \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} a_{m,n} = \sum_{n=1}^{\infty} \sum_{m=1}^{\infty} a_{m,n}.$

Although Fubini's theorem for infinite series is a special case of the more general Fubini's theorem, it is not appropriate to characterize it as a logical consequence of Fubini's theorem. This is because some properties of measures, in particular sub-additivity, are often proved using Fubini's theorem for infinite series. In this case, Fubini's general theorem is a logical consequence of Fubini's theorem for infinite series.
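A numerical illustration with the absolutely convergent double series a_{m,n} = 2^{-m} 3^{-n} (a choice made up for the example), whose exact sum factors as (Σ 2^{-m})(Σ 3^{-n}) = 1 · 1/2 = 1/2:

```python
N = 60  # truncation level; the tails are geometrically small

def a(m, n):
    return 1.0 / (2**m * 3**n)

rows_first = sum(sum(a(m, n) for n in range(1, N)) for m in range(1, N))
cols_first = sum(sum(a(m, n) for m in range(1, N)) for n in range(1, N))
print(rows_first, cols_first)   # both ~0.5; the order of summation is immaterial
```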
wikipedia
glaeser's continuity theorem
In mathematical analysis, Glaeser's continuity theorem is a characterization of the continuity of the derivative of the square roots of functions of class $C^2$. It was introduced in 1963 by Georges Glaeser, and was later simplified by Jean Dieudonné. The theorem states: let $f : U \to \mathbb{R}_0^+$ be a function of class $C^2$ in an open set $U$ contained in $\mathbb{R}^n$; then $\sqrt{f}$ is of class $C^1$ in $U$ if and only if its partial derivatives of first and second order vanish at the zeros of $f$.
wikipedia
haar's tauberian theorem
In mathematical analysis, Haar's Tauberian theorem, named after Alfréd Haar, relates the asymptotic behaviour of a continuous function to properties of its Laplace transform. It is related to the integral formulation of the Hardy–Littlewood Tauberian theorem.
wikipedia
leonid kantorovich
In mathematical analysis, Kantorovich had important results in functional analysis, approximation theory, and operator theory. In particular, Kantorovich formulated some fundamental results in the theory of normed vector lattices, especially in Dedekind complete vector lattices called "K-spaces" which are now referred to as "Kantorovich spaces" in his honor. Kantorovich showed that functional analysis could be used in the analysis of iterative methods, obtaining the Kantorovich inequalities on the convergence rate of the gradient method and of Newton's method (see the Kantorovich theorem). Kantorovich considered infinite-dimensional optimization problems, such as the Kantorovich-Monge problem in transport theory. His analysis proposed the Kantorovich–Rubinstein metric, which is used in probability theory, in the theory of the weak convergence of probability measures.
wikipedia
lipschitz function
In mathematical analysis, Lipschitz continuity, named after German mathematician Rudolf Lipschitz, is a strong form of uniform continuity for functions. Intuitively, a Lipschitz continuous function is limited in how fast it can change: there exists a real number such that, for every pair of points on the graph of this function, the absolute value of the slope of the line connecting them is not greater than this real number; the smallest such bound is called the Lipschitz constant of the function (and is related to the modulus of uniform continuity). For instance, every function that is defined on an interval and has bounded first derivative is Lipschitz continuous. In the theory of differential equations, Lipschitz continuity is the central condition of the Picard–Lindelöf theorem, which guarantees the existence and uniqueness of the solution to an initial value problem. A special type of Lipschitz continuity, called contraction, is used in the Banach fixed-point theorem. We have the following chain of strict inclusions for functions over a closed and bounded non-trivial interval of the real line: continuously differentiable ⊂ Lipschitz continuous ⊂ $\alpha$-Hölder continuous, where $0 < \alpha \le 1$. We also have: Lipschitz continuous ⊂ absolutely continuous ⊂ uniformly continuous.
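A small sketch estimating the Lipschitz constant of a sample function by sampling secant slopes; the function, interval, and sample count are arbitrary illustrations.

```python
import math
import random

random.seed(0)
f = math.sin
a, b = 0.0, 2.0 * math.pi

# The Lipschitz constant bounds every secant slope; for sin on this interval
# the smallest such bound is sup|f'| = 1.
estimate = 0.0
for _ in range(100_000):
    x, y = random.uniform(a, b), random.uniform(a, b)
    if x != y:
        estimate = max(estimate, abs(f(x) - f(y)) / abs(x - y))

print(round(estimate, 4))   # approaches 1 from below
```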
wikipedia
netto's theorem
In mathematical analysis, Netto's theorem states that continuous bijections of smooth manifolds preserve dimension. That is, there does not exist a continuous bijection between two smooth manifolds of different dimension. It is named after Eugen Netto. The case for maps from a higher-dimensional manifold to a one-dimensional manifold was proven by Jacob Lüroth in 1878, using the intermediate value theorem to show that no manifold containing a topological circle can be mapped continuously and bijectively to the real line. Both Netto in 1878, and Georg Cantor in 1879, gave faulty proofs of the general theorem.
wikipedia
netto's theorem
The faults were later recognized and corrected. An important special case of this theorem concerns the non-existence of continuous bijections from one-dimensional spaces, such as the real line or unit interval, to two-dimensional spaces, such as the Euclidean plane or unit square. The conditions of the theorem can be relaxed in different ways to obtain interesting classes of functions from one-dimensional spaces to two-dimensional spaces: space-filling curves are surjective continuous functions from one-dimensional spaces to two-dimensional spaces. They cover every point of the plane, or of a unit square, by the image of a line or unit interval.
wikipedia
netto's theorem
Examples include the Peano curve and Hilbert curve. Neither of these examples has any self-crossings, but by Netto's theorem there are many points of the square that are covered multiple times by these curves. Osgood curves are continuous bijections from one-dimensional spaces to subsets of the plane that have nonzero area.
wikipedia
netto's theorem
They form Jordan curves in the plane. However, by Netto's theorem, they cannot cover the entire plane, unit square, or any other two-dimensional region.
wikipedia
netto's theorem
If one relaxes the requirement of continuity, then all smooth manifolds of bounded dimension have equal cardinality, the cardinality of the continuum. Therefore, there exist discontinuous bijections between any two of them, as Georg Cantor showed in 1878. Cantor's result came as a surprise to many mathematicians and kicked off the line of research leading to space-filling curves, Osgood curves, and Netto's theorem.
wikipedia
netto's theorem
A near-bijection from the unit square to the unit interval can be obtained by interleaving the digits of the decimal representations of the Cartesian coordinates of points in the square. The ambiguities of decimal notation, exemplified by the two decimal representations of 1 = 0.999..., cause this map to be an injection rather than a bijection, but this issue can be repaired by using the Schröder–Bernstein theorem.
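A finite-precision sketch of the digit-interleaving map (illustration only; it sidesteps the 0.999... ambiguity by working with fixed-length decimal strings):

```python
def interleave(x: float, y: float, digits: int = 8) -> float:
    """Map (x, y) in the unit square to the unit interval by alternating
    decimal digits of the two coordinates."""
    xs = f"{x:.{digits}f}".split(".")[1]
    ys = f"{y:.{digits}f}".split(".")[1]
    mixed = "".join(a + b for a, b in zip(xs, ys))
    return float("0." + mixed)

print(interleave(0.123, 0.456))   # 0.142536... (digits alternate: 1,4,2,5,3,6,...)
```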
wikipedia
rademacher's theorem
In mathematical analysis, Rademacher's theorem, named after Hans Rademacher, states the following: If U is an open subset of Rn and f: U → Rm is Lipschitz continuous, then f is differentiable almost everywhere in U; that is, the points in U at which f is not differentiable form a set of Lebesgue measure zero. Differentiability here refers to infinitesimal approximability by a linear map, which in particular asserts the existence of the coordinate-wise partial derivatives.
wikipedia
equality of mixed partials
In mathematical analysis, Schwarz's theorem (or Clairaut's theorem on equality of mixed partials), named after Alexis Clairaut and Hermann Schwarz, states that for a function $f \colon \Omega \to \mathbb{R}$ defined on a set $\Omega \subset \mathbb{R}^n$, if $\mathbf{p} \in \mathbb{R}^n$ is a point such that some neighborhood of $\mathbf{p}$ is contained in $\Omega$ and $f$ has continuous second partial derivatives on that neighborhood, then for all $i$ and $j$ in $\{1, 2, \ldots, n\}$,

$\frac{\partial^2}{\partial x_i\, \partial x_j} f(\mathbf{p}) = \frac{\partial^2}{\partial x_j\, \partial x_i} f(\mathbf{p}).$

The partial derivatives of this function commute at that point. One easy way to establish this theorem (in the case where $n = 2$, $i = 1$, and $j = 2$, which readily entails the result in general) is by applying Green's theorem to the gradient of $f$.
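The symmetry is easy to check symbolically for any smooth sample function, for instance with SymPy (the function below is an arbitrary choice):

```python
import sympy as sp

x, y = sp.symbols("x y")
f = sp.exp(x) * sp.sin(x * y)

fxy = sp.diff(f, x, y)   # differentiate in x, then y
fyx = sp.diff(f, y, x)   # differentiate in y, then x
print(sp.simplify(fxy - fyx))   # 0: the mixed partials agree, as the theorem asserts
```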
wikipedia
equality of mixed partials
An elementary proof for functions on open subsets of the plane is as follows (by a simple reduction, the general case of Schwarz's theorem easily reduces to the planar case). Let $f(x, y)$ be a differentiable function on an open rectangle $\Omega$ containing a point $(a, b)$, and suppose that $df$ is continuous, with $\partial_x \partial_y f$ and $\partial_y \partial_x f$ continuous on $\Omega$.
wikipedia
equality of mixed partials
Define

$u(h, k) = f(a+h,\, b+k) - f(a+h,\, b),$
$v(h, k) = f(a+h,\, b+k) - f(a,\, b+k),$
$w(h, k) = f(a+h,\, b+k) - f(a+h,\, b) - f(a,\, b+k) + f(a, b).$

These functions are defined for $|h|, |k| < \varepsilon$, where $\varepsilon > 0$ and $[a - \varepsilon,\, a + \varepsilon] \times [b - \varepsilon,\, b + \varepsilon]$ is contained in $\Omega$.
wikipedia
equality of mixed partials
Applying the mean value theorem twice to $w$ (once in each variable, in each order) yields, for some $\theta, \theta', \phi, \phi'$ in $(0, 1)$,

$hk\, \partial_y \partial_x f(a + \theta h,\, b + \theta' k) = hk\, \partial_x \partial_y f(a + \phi' h,\, b + \phi k),$

and hence, dividing by $hk \ne 0$,

$\partial_y \partial_x f(a + \theta h,\, b + \theta' k) = \partial_x \partial_y f(a + \phi' h,\, b + \phi k).$

Letting $h, k$ tend to zero in the last equality, the continuity assumptions on $\partial_y \partial_x f$ and $\partial_x \partial_y f$ now imply that

$\frac{\partial^2}{\partial x\, \partial y} f(a, b) = \frac{\partial^2}{\partial y\, \partial x} f(a, b).$

This account is a straightforward classical method found in many textbooks, for example in Burkill, Apostol, and Rudin. Although the derivation above is elementary, the approach can also be viewed from a more conceptual perspective so that the result becomes more apparent.
wikipedia
equality of mixed partials
Indeed the difference operators $\Delta_x^t, \Delta_y^t$ commute, and $\Delta_x^t f, \Delta_y^t f$ tend to $\partial_x f, \partial_y f$ as $t$ tends to 0, with a similar statement for second-order operators. Here, for $z$ a vector in the plane and $u$ a directional vector $\binom{1}{0}$ or $\binom{0}{1}$, the difference operator is defined by

$\Delta_u^t f(z) = \frac{f(z + tu) - f(z)}{t}.$
wikipedia
equality of mixed partials
By the fundamental theorem of calculus for $C^1$ functions $f$ on an open interval $I$ with $(a, b) \subset I$,

$\int_a^b f'(x)\, dx = f(b) - f(a).$

Hence

$|f(b) - f(a)| \le (b - a) \sup_{c \in (a, b)} |f'(c)|.$

This is a generalized version of the mean value theorem.
wikipedia
equality of mixed partials
Recall that the elementary discussion on maxima or minima for real-valued functions implies that if $f$ is continuous on $[a, b]$ and differentiable on $(a, b)$, then there is a point $c$ in $(a, b)$ such that

$\frac{f(b) - f(a)}{b - a} = f'(c).$

For vector-valued functions with $V$ a finite-dimensional normed space, there is no analogue of the equality above; indeed, it fails.
wikipedia
equality of mixed partials
$\left| \Delta_1^t \Delta_2^t f(x_0, y_0) - D_1 D_2 f(x_0, y_0) \right| \le \sup_{0 \le s \le 1} \left| \Delta_1^t D_2 f(x_0, y_0 + ts) - D_1 D_2 f(x_0, y_0) \right| \le \sup_{0 \le r, s \le 1} \left| D_1 D_2 f(x_0 + tr, y_0 + ts) - D_1 D_2 f(x_0, y_0) \right|.$

Thus $\Delta_1^t \Delta_2^t f(x_0, y_0)$ tends to $D_1 D_2 f(x_0, y_0)$ as $t$ tends to 0. The same argument shows that $\Delta_2^t \Delta_1^t f(x_0, y_0)$ tends to $D_2 D_1 f(x_0, y_0)$.
wikipedia
equality of mixed partials
Hence, since the difference operators commute, so do the partial differential operators $D_1$ and $D_2$, as claimed. Remark: by two applications of the classical mean value theorem,

$\Delta_1^t \Delta_2^t f(x_0, y_0) = D_1 D_2 f(x_0 + t\theta,\, y_0 + t\theta')$

for some $\theta$ and $\theta'$ in $(0, 1)$. Thus the first elementary proof can be reinterpreted using difference operators. Conversely, instead of using the generalized mean value theorem in the second proof, the classical mean value theorem could be used.
wikipedia
tannery's theorem
In mathematical analysis, Tannery's theorem gives sufficient conditions for the interchanging of the limit and infinite summation operations. It is named after Jules Tannery.
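A classic numerical illustration (standard in treatments of Tannery's theorem, though not in the excerpt): with a_k(n) = C(n, k)/n^k one has a_k(n) → 1/k! as n → ∞ and |a_k(n)| ≤ 1/k!, which is summable, so the limit and the sum may be interchanged, giving lim (1 + 1/n)^n = Σ 1/k! = e.

```python
import math

def a(k: int, n: int) -> float:
    # a_k(n) = C(n, k)/n^k; note sum_k a_k(n) = (1 + 1/n)^n by the binomial theorem.
    return math.comb(n, k) / n**k

for n in (10, 100, 1000):
    s = sum(a(k, n) for k in range(n + 1))
    print(n, s)              # approaches e = 2.71828...
print(math.e)
```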
wikipedia