-
Minerals Beneficiation - Particle-Size Measurement and Control, by U. N. Bhrany, J. H. Brown
The specifications of particle size and the size analyses of fine particulate materials are commonly presented without reference to the method of analysis. A review of the various sizing methods showed that not only numerous sources of error but also different definitions of size are inherent in the different methods of analysis. To demonstrate the magnitude of the differences, the size distribution of finely crushed quartz was determined by several methods and a critical comparison of the data was made. The experimental results show that appreciably different sizes may be indicated for one material depending on the method of analysis used. However, because the shape of the size-distribution curves determined by almost all methods was constant, a simple method of correlating the data can be adopted. Conversion factors, by which size measurements obtained by the various methods can be expressed in terms of screen sizes, are presented as one method of correlating the results.

The size analysis and size specification for lump rock and particulate materials are commonly determined with sieves. With this technique, a readily reproducible measure of size can be obtained merely by duplicating the sieve and the method of sizing. For coarse materials, such as lumps of rock, this technique is simple and effective, and the effects of properties of the material such as shape can be readily understood. When very small particles are measured, however, problems arise. Small sieves are difficult to construct and maintain, and size separations become sensitive to the analytical procedure. For these difficult sieve sizes and for still smaller particles, other sizing techniques employing, for example, sedimentation, light extinction, and microscopy are available. Because these various techniques employ different size-dependent responses to establish size, it is appropriate to examine them and the results they yield to determine their relationships and their limitations.
The 'size' of a lump of rock is not an absolute quantity, but rather a defined one. With sieves, for example, the size is defined by the screen aperture and no reference is made to the actual shape of the particle; yet only in the case of a regular body, such as a sphere, can the sieve size be related to the shape. With sieves, only two dimensions of a particle can be gauged, for the sieve makes no allowance for the length of the particle passing through an aperture. Similarly, the size of a particle as measured by a microscopic technique need bear no relationship to the size as measured by sieves, although the two measurements may be related statistically. In fact, all size measurements are not only defined quantities, but the measurement actually used is an average that is controlled by the technique employed and, hopefully, reflects the property of interest in the material being studied. Many papers have been presented that illustrate the accuracy of the various sizing methods, present the advantages of certain methods for certain applications, and describe the principles and technical backgrounds of the various methods. No attempt will be made in the present paper to review this work in detail. The purpose of the present paper is to examine the limitations of the various sizing methods and to demonstrate experimentally the relationships between the sizes as determined by the various methods of analysis. It is hoped that recognition of these limitations and relationships will help avoid errors in size specification and size interpretation.

REVIEW OF SIZING METHODS

In specifying the size of a material, not only the data but also the method of measurement must be stated. For example, the size of a lump might be specified as its total volume, as its weight, as its maximum dimension, or as its minimum cross section; any one definition might be completely satisfactory for one problem, but not for another.
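To make the disagreement between definitions concrete, the following sketch (illustrative only, not from the original study) computes three common "sizes" for a single box-shaped particle: a nominal sieve size, an equivalent-volume sphere diameter, and the equal-projected-area circle diameter that a microscope count might report.

```python
import math

def size_measures(a, b, c):
    """Three illustrative 'sizes' for a box-shaped particle with edge
    lengths a, b, c (in mm).  Each definition responds to a different
    property, so the three numbers agree only for a sphere."""
    a, b, c = sorted((a, b, c), reverse=True)
    # A sieve passes the particle on its intermediate width; the longest
    # dimension can slip through the aperture lengthwise.
    sieve = b
    # Diameter of the sphere with the same volume as the particle.
    volume = a * b * c
    d_volume = (6.0 * volume / math.pi) ** (1.0 / 3.0)
    # Diameter of the circle with the same area as the particle's
    # largest projection (particles tend to lie flat on a slide).
    d_projected = 2.0 * math.sqrt(a * b / math.pi)
    return sieve, d_volume, d_projected

# An elongated 4 x 2 x 1 mm fragment yields three different "sizes":
sieve, d_vol, d_proj = size_measures(4.0, 2.0, 1.0)
```

For this particle the sieve size is 2.0 mm, the equivalent-volume diameter about 2.48 mm, and the projected-area diameter about 3.19 mm, which is the kind of spread the experiments below demonstrate.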
In practice, size analyses follow four basic approaches and fortunately, the conditions that restrict each approach are known. These major methods include: 1) screen analysis, 2) direct measurement of particle dimensions, 3) determination of equivalent spherical size in response to fluid flow, and 4) specific surface determination. Each of these techniques has several variations and some new techniques are also available, but the limitations of the basic techniques are sufficient to permit at least a preliminary
Jan 1, 1962
-
Extractive Metallurgy Division - Continuous Ion Exchange, by R. McNeill, D. E. Weiss, E. A. Swinton
In a continuous countercurrent exchange process, an alteration in any one of the operating conditions has a complex effect on the others, which can only be predicted by employing the transfer-unit or theoretical-stage theory on a trial-and-error basis. A simple method is described for illustrating diagrammatically the behavior of a countercurrent system, the equations being simplified by means of a concept, the maximum hypothetical exchange performance. An example based on a typical metallurgical system is given, in which a divalent metal is recovered from a dilute solution, the resin being regenerated continuously by a monovalent ion. Useful conclusions are drawn from a study of the theory. Practical methods for performing continuous ion exchange are discussed, and the development of equipment based on modified ore-dressing jigs is described. A swinging-sieve jig contactor is evaluated experimentally. DURING the last decade, the new synthetic ion exchange resins have been applied extensively in industries outside the field of water treatment, but there is no record of a continuous countercurrent process operating on an industrial scale. Attempts have been made to devise a satisfactory process, but many problems remain to be solved. The basic principles of continuous processes will be outlined, as well as the major problems in their operation and the progress made in the CSIRO laboratories toward the development of satisfactory industrial techniques. In the metallurgical field ion exchange resins can be used for various applications such as the recovery and concentration of valuable metals from mine waters, the regeneration of pickling and plating liquors, the prevention of pollution by waste effluents and the recovery of the constituents from them, and the purification of valuable metals such as the rare earths by chromatographic fractionation on columns of ion exchange resins.7,8
Further applications undoubtedly will be found in the field of hydrometallurgy, where the use of ion exchange resins would enable direct extraction of the desired metal ion from the filtered leach liquor or the leach pulp. For example, an ion exchange process has been described recently for the extraction of gold from a cyanide leach pulp. A continuous process would have advantages in many applications over the usual process employing a fixed bed and intermittent cycle. In a recovery process, it would yield a product stream of steady purity and concentration, it would waste less water in rinsing, and if the contacting apparatus were efficient less resin would be used, since each portion of the resin would be cycled as soon as it was loaded instead of lying idle until the whole bed was ready for regeneration. A major additional advantage is that it would be simpler to control automatically. It is probable that continuous operation will be the key to really large scale applications of ion exchange. The flow sheet of a continuous ion exchange recovery-concentration process is illustrated diagrammatically in Fig. 1. Dilute liquor containing the valuable ion flows through the stripping section countercurrently to a moving bed of resin and leaves after a final contact with freshly regenerated resin. The resin leaves the unit almost in equilibrium with the incoming liquor and then flows to the regenerating unit, where it is treated by a slow countercurrent flow of concentrated regenerant solution. The adsorbed ion is displaced from the resin and appears in the concentrated product stream. The resin then must pass through a rinse unit or section where regenerant entrained by the resin is washed back into the regeneration section by water. The regenerated and washed resin is then recycled back to the stripping section.
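The countercurrent recovery loop described above can be sketched with a simple staged mass balance. The Kremser relation below assumes a linear isotherm (q* = K·C) and fully regenerated resin entering the final stage; this is a deliberate simplification, since real exchange equilibria are nonlinear, and all symbols here are illustrative rather than taken from the paper.

```python
def kremser_recovery(n_stages, K, resin_flow, liquor_flow):
    """Fraction of the valuable ion transferred to the resin in an
    n-stage countercurrent cascade, assuming a linear equilibrium
    q* = K * C and fully regenerated resin entering (Kremser equation).

    resin_flow and liquor_flow are in consistent volumetric units;
    A = resin_flow * K / liquor_flow is the extraction factor.
    """
    A = resin_flow * K / liquor_flow
    if abs(A - 1.0) < 1e-12:
        # Degenerate case A = 1: recovery approaches n / (n + 1).
        return n_stages / (n_stages + 1.0)
    return (A ** (n_stages + 1) - A) / (A ** (n_stages + 1) - 1.0)

# With an extraction factor of 2, three stages already recover ~93 pct;
# doubling the stages pushes recovery still closer to complete.
r3 = kremser_recovery(3, 2.0, 1.0, 1.0)
r6 = kremser_recovery(6, 2.0, 1.0, 1.0)
```

The sketch illustrates why an alteration in any one operating variable (flow ratio, affinity K, stage count) shifts all the others, which is the interaction the maximum-hypothetical-exchange-performance concept is introduced to simplify.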
Theoretical Operating Behavior of Continuous Ion Exchange Stripping System

The simple theory of continuous ion exchange is analogous to that of solvent extraction and other diffusional transfer operations and is governed by the equilibrium relationship, the mass balance, the rates of mass transfer, and the contacting efficiency of the unit. Equilibrium Relationship—The relative affinity of two ions, A and B, for a particular resin immersed in their solution can be expressed by plotting compositions of the solution against the compositions that exist in resin in equilibrium with those solutions, i.e., C/C0 vs q/a, where C0 is the total normality of the solution, C is the normality of ion A in the solution, and a is the total exchange capacity of the resin in gram equivalents
Jan 1, 1956
-
Institute of Metals Division - Solidification of Lead-Tin Alloy Droplets, by D. Turnbull, J. H. Hollomon
THERE is a large body of evidence indicating that solidification during the liquid-solid transition is usually induced by heterogeneities present in the liquid. By dispersing liquid metals into small droplets, the impurities responsible for catalyzing solidification are isolated within a small number of these droplets. The effect of the foreign body therefore is restricted to a single drop by this technique. Thus upon cooling below the melting temperature, solidification is initiated by homogeneous nucleation in the majority of the droplets that do not contain impurities. In the case of solidification of liquid metals, the activation energy for nucleation is so great that its rate changes by orders of magnitude for a change in temperature of only several degrees centigrade. Effectively homogeneous nucleation occurs at a critical temperature upon continuous cooling. Thus by microscopic observation of single particles during cooling, a temperature at which the rate of homogeneous nucleation becomes sensible can be determined.3 Since at the temperatures at which nucleation occurs in the absence of impurities the rate of crystal growth is extremely rapid, the temperature at which the entire particle solidifies is very nearly the temperature at which the nucleation of the solidification occurs. Thus for liquids that freeze at high temperatures the onset of nucleation can be established by simply observing the temperature at which the marked heat evolution and increase in brightness of the particle occur. For liquids that freeze at lower temperatures the onset of nucleation can be determined by a rumpling and change in shape of the particle resulting from its solidification. The microscopic technique for observing the solidification of small particles has already been described. In earlier papers the nucleation of solidification of pure metals5,6 and of alloy systems7 showing complete liquid and solid solubility have been described.
In the present paper, the observations are extended to a simple eutectic system (Pb-Sn) where the possibility of the formation of two solid phases exists. Metals for the investigation were obtained from the American Smelting and Refining Co. in the form of pure lead and pure tin, 99.8 and 99.9 pct purity, respectively. An ingot of each of the pure metals was made into shot by heating the metals at a temperature about 50°C in excess of the melting point and pouring the liquid slowly into a container of water at 15°C. Samples of the shotted pure metals were weighed out to make alloys containing 5, 10, 15, 20, 25, 30, 40, 50, 60, 70, 80, and 90 atomic pct Pb. Samples of each alloy were then melted in separate beakers. Each melt was poured through a Pyrex funnel into a cylindrical mold (% in. ID). The casting solidified in 10 to 20 sec. The inside of the mold, as well as the funnel through which the metal was poured, was coated with graphite to eliminate adherence of the metal. Analyses were performed on some of the compositions and are given in Table I. The compositions also were checked, for these samples and for those that were not analyzed, by determining the spread between the liquidus and the solidus upon melting the small metal particles. These measurements agreed with the nominal compositions as well as did the analyses listed above.

Results

The results of the supercooling experiments for the several alloys are summarized in Table II and plotted on the constitution diagram in Fig. 1. Data for the pure lead and pure tin were taken from earlier investigations. The values for the maximum supercooling of the several alloys are the average of several determinations on a number of drops of each alloy. The maximum value in any determination was within about 2 pct of the average. For the alloys containing from 20 to 60 atomic pct Sn, inclusive, two marked changes of the surface structure were observed upon cooling.
At the higher temperature, after the first appearance of the solid phase it continued to grow slowly at a constant temperature and then stopped. At the lower temperature the alteration of surface structure was abrupt. For the alloys containing from 70 to 95 atomic pct Sn, inclusive, an abrupt change in surface structure was observed at a single critical temperature.
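The extreme temperature sensitivity of homogeneous nucleation noted above follows from classical nucleation theory, in which the activation barrier scales roughly as 1/ΔT². The prefactor A and barrier constant B in the sketch below are purely illustrative choices, not values fitted to Pb-Sn; the point is only that the computed rate climbs by orders of magnitude over a few degrees of extra supercooling, which is why a sharp critical temperature is observed on continuous cooling.

```python
import math

def nucleation_rate(T, Tm, A=1e40, B=4e8):
    """Homogeneous nucleation rate in the classical form
    J = A * exp(-B / (T * dT**2)), with dT = Tm - T the supercooling.

    A (nuclei per m^3 per s) and B (K^3) are illustrative constants,
    chosen only to place appreciable rates near dT ~ 0.2 * Tm.
    """
    dT = Tm - T
    if dT <= 0:
        return 0.0  # no driving force above the melting point
    return A * math.exp(-B / (T * dT * dT))

# Cooling just 5 degrees further (120 K vs 115 K of supercooling below
# a 600 K melting point) multiplies the rate by roughly two orders of
# magnitude:
ratio = nucleation_rate(480.0, 600.0) / nucleation_rate(485.0, 600.0)
```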
Jan 1, 1952
-
Coal - Comparative Effectiveness of Coal Cleaning Equipment, by Orville R. Lyons
This paper presents a method whereby the amount of misplaced material and the difficulty of the separation can be used to compare coal cleaning equipment of all types, from effectiveness and capacity standpoints. The correlations presented do not include all types of equipment currently available, but the method can be used to evaluate any make or type of coal cleaning equipment, both old and new. THE relative performance of coal washing equipment, or the effectiveness with which any type or make of equipment removes impurities from coal, has been most difficult to evaluate in the past. The most widely used yardstick is the Frazer and Yancey efficiency formula developed in 1922,' but Yancey in a later article states that "washers treating coals of different density composition or operating at different densities of separation cannot be compared directly on the basis of this criterion."' Prior to and since 1922, a variety of other methods has been used for comparison purposes, including the distribution curve, the error area, and the "ecart probable" or probable error. Yancey and Geer in discussing these methods conclude, "Performance can be evaluated in a number of different ways, with the choice of the proper method to use being dictated by the objectives of the investigation and the data available."' It is true that performance can be evaluated in a variety of ways, but if the equipment is to be evaluated on an effectiveness basis, there should be only one universal comparison method. Varying methods have been used because one universal comparison method has not been found or developed. In the article previously quoted, Yancey and Geer state in clear terms the primary concept for a universal comparison method: "One of the simplest, and certainly one of the most obvious evaluations of washery performance is the quantity of sink material in the washed coal and the float material in the refuse. 
If the washery products are tested at the density at which the washing unit is operated, the sink in the washed coal and the float in the refuse represent material that has been misplaced." The quantity of misplaced material was used as a criterion of washery performance by Lincoln in 1913, by the United States Bureau of Mines in 1938, by Hancock in 1947, and by the national French research agency Cerchar in recent years. In 1950 Anderson proposed the use of this criterion as an efficiency value to replace the Frazer and Yancey formula. However, none of the above-mentioned investigators used the misplaced-material concept in a manner that would provide universal coal-cleaning equipment comparisons.

The Correlation Theory

The ideal coal cleaning process would treat all sizes and would make a perfect separation at any given specific gravity. All material lower in density than the desired value would report in the coal product and all material higher in density would report in the refuse product. Unfortunately, no known cleaning process achieves this goal, and there seems little likelihood that any process yet to be invented will do more than approach it. When coal is treated in volume under operating conditions, it is impossible to avoid mechanical entrapment, fluctuations in throughput and effective gravity of separation, and the creation of turbulent currents, even when a true heavy-liquid bath is used and the feed is closely sized and contains little intermediate gravity material. This being so, it is possible to appreciate the difficulties inherent in trying to obtain a perfect separation when treating a wide range of sizes and a feed containing high percentages of intermediate material, using turbulent currents to help create the effective separation gravity, under operating conditions which normally tend to be on the overload side.
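The misplaced-material criterion quoted above reduces to a simple weighted sum. The sketch below, with invented example figures rather than data from the paper, expresses total misplaced material as a percentage of feed, given the clean-coal yield, the sink material reporting to the washed coal, and the float material reporting to the refuse.

```python
def misplaced_material(yield_coal, sink_in_coal, float_in_refuse):
    """Total misplaced material, as a percentage of feed.

    yield_coal      -- clean-coal yield, pct of feed
    sink_in_coal    -- sink (refuse-density) material in the washed coal, pct
    float_in_refuse -- float (coal-density) material in the refuse, pct

    Each product's misplaced fraction is weighted by that product's
    share of the feed, then the two are summed.
    """
    yield_refuse = 100.0 - yield_coal
    return (yield_coal * sink_in_coal + yield_refuse * float_in_refuse) / 100.0

# Hypothetical washer: 80 pct yield, 2 pct sink in the coal, 15 pct
# float in the refuse -> 4.6 pct of the feed is misplaced.
total = misplaced_material(80.0, 2.0, 15.0)
```

Plotting this total against the percentage of near-gravity material in the feed is the correlation the author proposes for comparing washers of different types.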
When coal is separated from refuse in any coal cleaning equipment, some refuse always reports to the coal and some coal to the refuse; the writer therefore assumed that there should be a relationship between the total amount of misplaced material produced by any given piece of equipment and the difficulty of separation as represented by the percentage of near-gravity material in the feed. With small amounts of near-gravity or ±0.1 material in the feed there should be less misplacement of material than would occur with large amounts of near
Jan 1, 1953
-
Institute of Metals Division - Cleavage Steps on Zinc Monocrystals: Their Origins and Patterns, by J. J. Gilman
Examination showed that characteristic cleavage step patterns are observed on the cleavage surfaces of undeformed, slipped, bent, twinned, compressed, and indented zinc crystals; and the effect of temperature is discussed. Dimples were seen to produce cleavage steps in a treelike pattern in otherwise undeformed crystals. The steps seem to originate when cracks intersect screw dislocations. IT has been known for a long time that the path of fracture in polycrystals may be discontinuous (see Jaffe, Reed, and Mann1 for a review). Recently, Kies, Sullivan, and Irwin2 have proposed, and given evidence, that crack propagation is discontinuous within individual crystals as well. Other evidence has been given by Low. When discontinuous cracks within a crystal join together to make a macrocrack, the lamellae between each set of two cracks are torn somewhere, forming small cliffs. These cliffs appear as lines when the cleavage surface is observed microscopically.4,5 The lines have been called vein, tree, and riverlike markings by various authors, and they have sometimes been mistaken for fissures. The descriptive term cleavage steps is used in this paper. Cleavage steps vary in height over a wide range of values, from molecular dimensions to far larger values. Kies, Sullivan, and Irwin,2 as well as George, have shown that the gross cleavage step patterns for plastics, polycrystalline metals, and monocrystals are sometimes similar. Thus, they depend mostly on the mechanical variables that prevail during cleavage and are relatively insensitive to the structure of the material. For example, parabolic markings2,7,8 sometimes result when cracks open up ahead of, and not coplanar with, the main crack front. If the advance crack has the same velocity as the main crack, their intersection line is a parabola; otherwise it is a hyperbola or an ellipse. The patterns are strongly affected by differences in crack velocities.
This results in chevron patterns which point to the place of origin of the main crack. It is the purpose of this paper to demonstrate the existence of a mechanism of cleavage step formation which is a continuous rather than a discontinuous process. Also, certain characteristic step patterns are described, and the strong effect of temperature is shown. The specimens were zinc monocrystals (grown from 99.999+ pct pure metal). These were cleaved at room temperature and at −196°C.

Results and Discussion

Cleavage step patterns are highly variable from point to point on a given specimen, as well as from one specimen to another. Although the patterns shown in the photographs are typical, they have been selected for graphic illustration. Figs. 1a and 1b compare undeformed crystals that were cleaved at −196°C and room temperature, respectively. Cleavage at room temperature (Fig. 1b) resulted in a higher density of high steps (dark black lines) and enhanced the visibility of the fine background markings. Deformation by simple slip caused no marked change in the step patterns until the glide strain reached about 1.0. But, as Fig. 1c shows, the density of high cleavage steps was greatly increased by large glide strains. Corrugations lying perpendicular to the slip direction may also be seen in Fig. 1c. These are caused by deformation bands. The cleavage resistance of the crystal of Fig. 1c was very high compared to undeformed crystals (estimated by the force on a needle required for cleavage). Striking and varied cleavage step patterns were observed on bent crystals. Two characteristic patterns that were observed on crystals bent at 25°C, and cleaved by reverse bending at −196°C, are shown in Figs. 2a and 2b. The first, Fig. 2a, consists of V-shaped lines similar to the parabolas of other materials.2,7 Fig. 2b shows a pattern that is the equivalent of Fig. 1a, consisting of faint background lines with a few higher step markings.
Cleavage of bent crystals at room temperature resulted in Figs. 2c and 2d. Now, the cleavage step lines show a strong tendency to follow one of two perpendicular paths. In Fig. 2c (bent once), many of the cleavage step components that lie parallel to the bend axis are assembled into irregular lines. In Fig. 2d (bent twice), the cleavage steps again tend to consist of two perpendicular components, but neither of the components is assembled into lines. Also, the step density is higher.
Jan 1, 1956
-
Industrial Minerals - Economic Aspects of Ground Water in Florida, by V. T. Stringfield, H. H. Cooper
ONE of the earliest investigations of ground water in Florida was made in 1513, when Ponce de Leon arrived at St. Augustine in search of the Fountain of Youth. The history of the development of the water resources of the State shows that the large artesian reservoir that underlies Florida was discovered in the latter part of the last century. Part of that history is given by L. C. Johnson, who states that the first successful artesian well in Florida was drilled at St. Augustine between 1880 and 1882. After the City of Jacksonville failed to obtain a flow of artesian water at a depth of nearly 400 ft and abandoned the drilling, R. N. Ellis, City Engineer, and L. C. Johnson, using Johnson's knowledge of the geologic structure of the artesian reservoir, estimated correctly that artesian water could be obtained at a depth of about 500 ft. Thus began the development of the large artesian system in the northeastern part of the State. In 1908, the Florida Geological Survey issued its first report on the ground water of central Florida. More recent reports give the results of investigations that have been in progress, in cooperation between the Florida Geological Survey and the U. S. Geological Survey, since 1930. As a result, the ground-water geology and hydrology of Florida are now so well known that ground-water problems such as those that confronted the early investigators no longer exist. However, new problems arise and new discoveries are made as the demand for more ground water increases with the development of the State.

Water-Bearing Formations

Descriptions of the geologic formations and a map showing their distribution at the surface are given in a recent report by Cooke. The geologic formations that yield the ground-water supplies in Florida represent only a small part (about 1000 ft) of the total thickness of the sedimentary rocks (more than 15,000 ft) that underlie Florida.
The water-bearing rocks that yield fresh water include more than two dozen formations that range in age from Eocene to Recent. The Eocene formations, which consist chiefly of limestone, are the oldest and the deepest of the formations in Florida that yield fresh water to wells. In 1944, on the basis of a study of the foraminifera, the Applins divided the limestone of Eocene age into six formations, as follows, from top to bottom:

Age or group: Formation
Jackson: Ocala limestone
Claiborne: Avon Park limestone, Tallahassee limestone, Lake City limestone
Wilcox: Salt Mountain limestone, Oldsmar limestone

The Ocala limestone and the underlying formations of Claiborne age, along with some of the overlying limestone of Oligocene and Miocene age, constitute the principal source of water in Florida and southeastern Georgia and generally may be regarded as forming one artesian aquifer or water-bearing unit. This aquifer is referred to in this paper as the Floridan aquifer, a name proposed by Parker. In part of Seminole County alone, more than 1000 flowing wells yield water from the Floridan aquifer. A yield of 6500 gpm, or about 9.5 million gal per day, by natural flow from a well penetrating the aquifer at Jacksonville, Florida, was observed in 1942. A yield of about 7000 gpm by natural flow was reported for a well 1390 ft deep at St. Augustine in 1887. The largest yield reported from a pumped well penetrating the aquifer is 7500 gpm, or about 10.5 million gal per day. The aquifer is also the source of some of the large springs, such as Silver Springs, whose discharge, according to measurements made by the U. S. Geological Survey, has ranged from 526 to 1350 sec-ft, or from 340 to 872 million gal per day. The top of the Ocala limestone, as represented by contours in Fig. 1, indicates in a general way the geologic structure of the formations that comprise the Floridan aquifer.
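The flow figures quoted above mix gallons per minute, second-feet (cubic feet per second), and millions of gallons per day. The conversions can be checked with the standard factors (1 gpm = 1,440 gal per day; 1 cu ft = 7.48052 gal, so 1 sec-ft ≈ 646,317 gal per day). The Silver Springs range reproduces closely, while the well figures in the text are evidently round approximations.

```python
def gpm_to_mgd(gpm):
    """Gallons per minute -> million gallons per day."""
    return gpm * 60 * 24 / 1e6

def cfs_to_mgd(cfs):
    """Second-feet (cubic feet per second) -> million gallons per day.
    Uses 7.48052 US gallons per cubic foot and 86,400 s per day."""
    return cfs * 7.48052 * 86400 / 1e6

# Silver Springs discharge range, 526 to 1350 sec-ft:
low = cfs_to_mgd(526)    # ~340 million gal per day
high = cfs_to_mgd(1350)  # ~872 million gal per day

# Largest flowing well, 6500 gpm:
well = gpm_to_mgd(6500)  # ~9.4 million gal per day
```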
The Ocala is at or near the surface in the areas represented by shading in the northwestern part of the peninsula and also in an area in western Florida adjacent to Alabama and
Jan 1, 1952
-
Extractive Metallurgy Division - Vacuum Dezincing of Desilverized Lead Bullion, by T. R. A. Gokcen
THE possibilities of separating and purifying metals by high vacuum distillation were examined by Kroll.1 He suggested vacuum treatment for the removal of zinc from the lead produced after Parkes desilverizing. The St. Joseph Lead Co. developed the first commercial vacuum dezincing process at their Herculaneum refinery, as described by Isbell.2 The Broken Hill Associated Smelters at Port Pirie, Australia, has, after several years of pilot plant operations and fundamental investigations, developed a continuous process, which will be described briefly in a forthcoming publication. As the full-scale continuous vacuum dezincing plant at Port Pirie is still experimental, publication of full practical details of the plant will be deferred until the unit is operating as a normal part of the continuous refinery. This paper deals only with theoretical aspects of vacuum distillation processes, with particular reference to vacuum dezincing. The method of mathematical analysis is of general interest, as it may be applicable to other metallurgical separations which have been investigated recently.4-6

Evaporation Processes

At about atmospheric pressure, or higher, most liquids possess a boiling point—a temperature at which any heat put into the liquid is absorbed only as latent heat, not as specific heat. If a steady heat input is supplied, the liquid's temperature rises to this value, then remains constant while bubbles of vapor form beneath the surface. The rate of evaporation is determined solely by the rate of heat transfer to the liquid; the temperature of boiling is determined by the partial pressures of the volatile constituents in the liquid and the total pressure above the surface. If the rate of heat transfer to the liquid is increased, the temperature remains constant, and the rate of boiling increases.
When evaporating metals under vacuum, however, the partial pressures concerned are generally so small that boiling does not occur, because at even a fraction of a millimeter below the surface the hydrostatic pressure is usually too great to permit the formation of a pocket of vapor. In addition, the high thermal conductivity of metals tends to prevent the local superheating which is necessary for bubble formation. Although this effect is doubtless also exerted when boiling metals at higher pressures, the magnitude will be less because the degree of superheat required to form bubbles is very much less at the higher temperatures involved. Under vacuum, therefore, evaporation of volatile constituents takes place only from the exposed surface, and the rate of evaporation depends upon the surface area, the surface concentration of volatile constituents, the surface temperature, and the partial pressures of volatile constituents immediately above the surface. If the heat input is raised above a certain level, the effect is not to increase the evaporation rate at constant temperature, but to raise the temperature of the liquid until at some higher level an increased rate of evaporation (and thus of latent heat absorption) again balances the heat input rate. Many substances (including metals) have a very large intrinsic evaporation rate at quite low temperatures—far below their normal boiling points. However, at atmospheric pressure, the large numbers of atoms evaporated are almost completely deflected back into the liquid (or solid) surface by air molecules. Thus the back condensation rate is practically equal to the gross evaporation rate, and the net evaporation rate is practically zero. It can therefore be seen that an overall distillation rate depends not only upon the intrinsic evaporation rate, but also upon the ability of the volatile atoms to move away from the evaporating surface. This movement is facilitated by the provision of a condensing surface close by.
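The "intrinsic evaporation rate" discussed above is conventionally estimated with the Hertz-Knudsen-Langmuir equation. The excerpt does not give this formula, so the sketch below is a standard-textbook illustration only; the zinc vapor-pressure value used is a hypothetical placeholder, not data from the paper.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def langmuir_flux(p_sat_pa, molar_mass_kg_mol, temp_k, alpha=1.0):
    """Maximum (gross) evaporation mass flux, kg/(m^2*s), from the
    Hertz-Knudsen-Langmuir equation; alpha is the evaporation coefficient."""
    return alpha * p_sat_pa * math.sqrt(molar_mass_kg_mol / (2.0 * math.pi * R * temp_k))

# Illustrative only: a hypothetical zinc vapor pressure of 10 Pa at 873 K
flux = langmuir_flux(10.0, 0.06538, 873.0)   # roughly 0.012 kg/(m^2*s)
```

Note the flux is directly proportional to the vapor pressure at the surface, which is why removing foreign molecules above the melt (the vacuum's job, as the text explains next) controls the net rate.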
Vacuum Distillation: The function of the vacuum above the evaporating surface is to remove foreign molecules, so that the chances of deflection of an evaporated atom back into its source are reduced. When the residual gas pressure is reduced so far that the evaporated atoms have a high probability of reaching a nearby condensing surface without suffering collision with a foreign molecule, the state of affairs is termed "molecular distillation." This process is practiced commercially today for the purification of numerous organic chemical products of high unit value, but not, to the writer's knowledge, for any metallurgical separation. When the degree of vacuum produced in a still is not sufficient to promote molecular distillation, then the evaporated molecules must diffuse through the
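The "high probability of reaching a nearby condensing surface without suffering collision" criterion is usually expressed through the kinetic-theory mean free path. The sketch below is a standard illustration, not the paper's analysis; the molecular diameter for residual air is an assumed round figure.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def mean_free_path(temp_k, pressure_pa, molecule_d_m=3.7e-10):
    """Kinetic-theory mean free path, m. The default diameter is a rough
    assumed value for air molecules, not a figure from the paper."""
    return K_B * temp_k / (math.sqrt(2.0) * math.pi * molecule_d_m**2 * pressure_pa)

def is_molecular_distillation(gap_m, temp_k, pressure_pa):
    """Molecular-distillation regime holds when the mean free path exceeds
    the evaporator-to-condenser gap (Knudsen number > 1)."""
    return mean_free_path(temp_k, pressure_pa) > gap_m

# At 300 K and 1 Pa the mean free path is roughly 7 mm, so a 5 mm
# evaporator-condenser gap would qualify; at 100 Pa it would not.
```

This makes concrete why, below a certain residual pressure, evaporated atoms fly ballistically to the condenser, while above it they must diffuse through the residual gas as the text goes on to describe.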
Jan 1, 1954
-
Dust Control Using Wet-Type Dust CollectorsBy Bob J. Rawicki
TYPES OF WET DUST COLLECTORS Basically, there are two types of wet-type dust collectors. One is mechanical, incorporating pumps, motors, fans, sprays, filters, or flooded beds. These come in many forms, but their operating principle is basically the same. The screen or flooded bed is wetted by a series of sprays. Polluted air is drawn in through a fan; here the dust particles impinge on the screen and are flushed off into a settlement tank. The air then passes through spray eliminators to the atmosphere. A great deal of time and money has been spent developing this type of collector. In laboratory tests where clean water at constant pressure, air at constant volume, and dust at low but constant mass to volume were applied, these collectors gave high percentage dust collection efficiencies. Theoretically and technically, they are very good. In practice, however, under rugged mining conditions where there are no constants, and dust load, particle size, water pressure, and volume of air vary with time, their efficiency varies dramatically. Some of the most common problems are: 1. clogging of screens 2. blocking of nozzles 3. replacement of damaged pump stators Any of the above result in loss of air flow and dust collection efficiency. Generally, there are so many mechanical moving parts that something is always breaking down. This type of collector is very expensive in parts and labor to maintain. The second type of dust collector, which I personally developed with the help of the National Coal Board of Great Britain, has none of the above mentioned problems and it works on a completely different principle. My collector, the Mark III Precipitaire Wet-Type Dust Collector (an improved version of previously successful models), has no internal moving parts, no flooded beds, screens, spray nozzles, etc. It is highly efficient - 99.8% efficiency on total collected dust and 97% on respirable dust.
The pressure drop through the collector is constant; hence, air flow remains constant. There is no maintenance expenditure, be it labor or parts. The only maintenance required is desludging. This, of course, varies with the dust load. With a low dust load, desludging may only be required every 4 weeks, but this is a very simple operation that can be handled by unskilled men and it takes only a very short time. My collector is powered by a ventilation fan that is placed on the clean air side for longer fan and motor life. Dust is extracted solely by water action. The collector's operating principle is: Dust-laden air is drawn in along ducting into a tapered scrubber section, shaped to create a self-induced curtain of water. This action washes the dust particles from the air and the collected dust settles into the bottom of the tank. Cleaned air is then exhausted via a series of spray eliminators, where it is completely dried, and on into the atmosphere. These collectors have been sold in Great Britain, to the National Coal Board, and throughout Europe. They have been working very well for many years now. GENERAL APPLICATIONS Dust is very dangerous; silicosis and black lung kill and disable many workers each year. Many mine explosions can be attributed to coal dust. By the use and application of dust collectors, these deaths and disasters can be avoided. There really is no excuse. However good and efficient a dust collector may be, its positioning and installation arrangement are critical. The collector may remove 99% of dust supplied to it, but it takes the skill and experience of a dust control engineer to design and fit ducting, hoods, etc. to remove polluted air at the source and deliver it to the collector. In some cases this is very simple; in the case of longwall machines, it is very difficult. In general, dust collectors should be used to collect dust at every point of dust generation.
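The efficiency figures quoted above translate directly into outlet dust loads. The sketch below applies the quoted 99.8% (total) and 97% (respirable) efficiencies to hypothetical inlet concentrations; the inlet loads are illustrative assumptions, not measurements from the text.

```python
def outlet_load(inlet_mg_m3, efficiency_pct):
    """Dust concentration leaving the collector, mg/m^3,
    given inlet concentration and collection efficiency."""
    return inlet_mg_m3 * (1.0 - efficiency_pct / 100.0)

# Hypothetical inlet loads for a heading return airway
total_out = outlet_load(200.0, 99.8)       # 0.4 mg/m^3 of total dust remains
respirable_out = outlet_load(20.0, 97.0)   # 0.6 mg/m^3 of respirable dust remains
```

The same penetration arithmetic is why the author stresses ducting design: dust that bypasses the collector entirely contributes at 100% penetration, no matter how efficient the unit itself is.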
I have noted a great reliance on dilution as a means of dust control. I personally think this is a dangerous and unreliable method. As mines become deeper and the use of larger and more productive equipment becomes more common, reliance on dilution will be impossible and dust collectors will be essential.
Jan 1, 1982
-
Numerical Simulation Of Fluid Flow In Porous/Fractured MediaBy Bryan J. Travis, Thomas L. Cook
INTRODUCTION Our growing concern for adequate and secure sources of energy and minerals has stimulated vigorous exploration for new sources, research toward a better understanding of geological processes, and development of new extraction technologies. The need for control, or at least prediction, of subsurface fluid flow is important for many of these technologies, for example: primary, secondary and tertiary recovery of oil; ground water and waste management; in-situ fossil energy extraction (oil shale, coal, tar sands); and solution mining of uranium, copper and other minerals. These technologies, especially the last two, are characterized by highly complex systems. A partial list of the physical processes occurring would include flow in porous/fractured media, multi-phase and multi-component flow with heat transport, chemically active fluids and soils, tracers, diffusion and dispersion, fracturing and dissolution. A great deal of understanding of how these processes behave and interact can be obtained from models. MODELS Theoretical models are valuable because they: 1) provide a frame of reference for interpreting results of laboratory and field experiments; 2) once validated by experiments, allow a variety of geometries, injection/production strategies, etc., to be examined (at relatively low cost) for efficiency and stability; 3) can provide guidance to the design of experiments and field operations. Most models are based on the fundamental principles of mass, momentum and energy balance. But from this starting point, many paths can be taken. For example, there is the theory of Payatakes (1973) which concentrates on the microscale dynamics. In this approach, the rock or soil matrix is represented by a complex of characteristic channels (such as periodically constricted smooth tubes). Detail of flow within the characteristic channel is calculated very accurately.
A difficulty with this model is that description of a representative channel can require several parameters which may not be easily measured. Also, it is not clear how the channels can be combined practically to model a large scale flow. Another type of model is the "global" one described by Bear (1972). Here, the continuum equations for conservation of mass, momentum and energy are averaged over a distribution of pore channels, resulting in a set of conservation equations in which the small scale structure of the medium is replaced by quantities such as porosity, permeability, dispersion coefficients and tortuosity. This approach allows computation of large scale flows. However, additional constitutive equations are needed which relate averaged quantities such as permeability to observables such as local saturation, particle size distributions, and others. An important difference between models is the way they handle the momentum equation. Payatakes' model solves the full equation. In others, it is replaced by a simpler relation such as Darcy's law (valid for slow flow rates) or by Forchheimer's equation which extends Darcy's law to higher flow rates. These simple relations can nevertheless match a great deal of experimental data (Dullien 1975). The permeability term which appears in Darcy's law and in Forchheimer's expression has been related to other quantities such as porosity and particle surface area. The Kozeny-Carman equation is a well-known example, valid for some ranges of porosity and particle sizes and for some materials. Several other semi-theoretical, semi-empirical formulations have been devised, none of which are entirely satisfactory. One goal of researchers has been the ability to predict permeability of a porous material from basic measurable quantities such as grain size distributions without recourse to adjustable parameters.
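As one concrete instance of the constitutive relations discussed above, the sketch below evaluates a common sphere-packing form of the Kozeny-Carman equation and feeds the resulting permeability into Darcy's law. The grain size, porosity, fluid viscosity, and pressure gradient are illustrative values, not data from the paper.

```python
def kozeny_carman_k(porosity, grain_d_m):
    """Permeability (m^2) from a common sphere-packing form of the
    Kozeny-Carman equation: k = phi^3 * d^2 / (180 * (1 - phi)^2)."""
    return porosity**3 * grain_d_m**2 / (180.0 * (1.0 - porosity)**2)

def darcy_velocity(k_m2, viscosity_pa_s, pressure_gradient_pa_m):
    """Superficial (Darcy) velocity, m/s, for slow single-phase flow."""
    return k_m2 / viscosity_pa_s * pressure_gradient_pa_m

k = kozeny_carman_k(0.30, 1.0e-4)      # ~3.1e-12 m^2 for a 100-micron sand
q = darcy_velocity(k, 1.0e-3, 1.0e4)   # water under a 10 kPa/m gradient
```

The cubic dependence on porosity is what makes such formulas sensitive to packing, and it is one reason the text notes they hold only for "some ranges of porosity and particle sizes and for some materials."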
This effort has proceeded from consideration of distributions of idealized, non-intersecting channels, to intersecting channels, to consideration of both distributed pore radii and pore neck radii. This last approach has been used successfully to predict permeabilities (ranging over several orders of magnitude) in sandstones (Dullien 1975). Additionally, studies on explicit networks of channels (e.g., Fatt 1956) and percolation theory (e.g., Larson et al. 1981) have been used to examine interconnectivity effects in porous media with the hope of eventually being able to predict permeability accurately. To be of more than theoretical interest, a model must be compared with experiments under controlled laboratory conditions where boundary and initial conditions and relevant material properties can be accurately determined. Also, extraneous forces can be eliminated so that the interactions between processes of interest can be clearly seen and flow models can be rigorously tested. In contrast to the well-defined environment and
Jan 1, 1982
-
Minerals Beneficiation - Low-Temperature Carbonization of Lignite and Noncoking Coals in the Entrained State - DiscussionBy G. A. Vissac, R. G. Minet, N. E. Sylvander
R. G. Minet—The authors' description of the remarkable progress made in the last few years in applying the fluidized solids technique to the problem of lignite drying and carbonization clearly demonstrates that engineering techniques available today may make many processes practical and profitable which a few years ago were considered otherwise. As pointed out in the article, the future of the carbonization step hinges on the value and utilization of low temperature tar. On paper, at any rate, this tar looks like a valuable raw material for the chemical industry to use. Some 50 to 60 pct may be converted to pitch for electrodes, roofing, road tars, and other valuable products; the 25 to 30 pct tar acids could conceivably form a basis for a new low cost resin or plastic of the phenol-formaldehyde type. Yet these materials are so new and dissimilar from available sources that much work must be undertaken by the chemical industry before they will be accepted. Now that the work of Dr. Parry and his colleagues has made a large supply of low temperature tar a real possibility, I would expect the chemical industry to accelerate its work in this field. On the basis of certain data available in the literature it appears possible to produce a more aromatic tar, although in smaller yield, by operating at temperatures in the range of 1300° to 1500°F. Operating a fluidized bed process for lignite at these conditions should be technically possible, at least, and could produce a tar having more familiar characteristics. I wonder if Dr. Parry would care to comment on such a possibility. Incidentally, in our own work on carbonizing coking coals in fluidized beds, using Ohio, Pennsylvania, and West Virginia high volatile bituminous coals, we have obtained yields which agree with the correlation given for tar yields vs moisture- and ash-free volatile Btu. Our data are slightly under the line, but certainly in the range of the correlation.
We have also obtained evidence in support of the authors' statement as to the effect of air on the process. In our pilot plant all the heat required for carbonizing is released by internal combustion of char in the fluidized bed in normal operation. We are also equipped to obtain all carbonization heat by external electric heaters on the shell of the carbonizer while introducing only an inert gas to the process. We note no difference in tar yield or characteristics between the two operations. In the case of the gas, however, it appears that some hydrogen is consumed by the air combustion. We would be interested in hearing in a little more detail about the hot dust and char handling problems at Sandow. Have the authors found char subject to spontaneous combustion? Have they ever tried a coking coal in their pilot plant? V. F. Parry (author's reply)—Production of a more aromatic tar by operation of the carbonizer at 1300° to 1500°F does not appear to be economical. 1) The capacity of a reactor operating at 1500°F would be 30 to 50 pct less than the capacity at 932°F and the cost of processing would increase materially. 2) The cracking of tar vapors in a reactor requires appreciable time to complete the reactions and it is doubtful that, considering the very short time of residence of tar vapors within the reactor (4 to 10 sec after evolution), the basic character of the tar would be changed significantly. This is indicated from the data reported in Table XIV. 3) General studies have shown that it is advisable to operate at the minimum temperature to produce the maximum tar and minimum gas. 4) Operating problems and the maintenance of vessels and reactors, and the hazards of handling hot char, increase with the temperature of carbonization. It is technically possible to operate at temperatures as high as 1500°F, but in my opinion it is not economical or desirable.
We believe that the primary tar must be won in the simplest way and then processed alone to change its character for production of desired products. It is interesting to have confirmation of our observations on the reaction of air with the products of carbonization. The major reaction is with the char, fol-
Jan 1, 1957
-
Drilling–Equipment, Methods and Materials - Stresses Caused by Bit Loading at the Center of the HoleBy J. C. Wilhoit, J. B. Cheatham
Although an oil well is a long cylindrical hole with an irregular bottom, it appears likely that the nature of the stress concentration at the bottom of the hole can be ascertained from an analysis of the stresses around a short cylindrical cavity with rounded corners and smooth bottom. Such a cavity is studied primarily because it leads more readily to a solution to the problem by the use of stress functions. In this paper the stress distribution around a short cylindrical cavity subjected to bit loading, overburden and drilling-fluid pressures is determined by means of an analytical solution which approximately satisfies the boundary conditions of the problem. From this solution the stresses at the corner of the hole are calculated to be about 35 per cent lower than comparable results obtained by photoelastic and relaxation analyses. This difference is apparently due to the large radius of curvature at the corner of the cavity in the present analysis. Since good agreement is obtained between the results of this analysis and the stresses calculated for a similar loading on a semi-infinite elastic solid, it is concluded that the bit action in the region near the center of the hole is not appreciably affected by the presence of the sides of the hole. INTRODUCTION Much has been written concerning drilling "under down-hole conditions" and pertaining to the stress distribution in the rock at the bottom of an oil well.1-5 For example, it is known that identical rocks can be drilled more rapidly at the surface than under subsurface conditions of pressure and stress.6 Information on the behavior of rocks under loading can be obtained from triaxial test data.7-9 From such tests it is found that rocks exhibit brittle failure when the confining pressure and pore pressure are equal, but the mode of failure may change to ductile as the difference between the confining pressure and the pore pressure is increased.
Brittle failure implies that there is very little permanent deformation before fracture, whereas ductile failure indicates that permanent deformation takes place before fracture. Some rocks are ductile at differential pressures of 5,000 psi, but other rocks are brittle even at differential pressures of more than 50,000 psi. Cuttings embedded in mud at the bottom of the hole may act as a plastic mass which the bit teeth must penetrate in order to attack the virgin rock below.10 During drilling, the mud pressure acts as the confining pressure, and in many cases the difference between the mud pressure and the formation pore pressure is sufficiently low so that the rock most likely fails in a brittle manner. Very little penetration by the bit teeth may be required for brittle failure of the rock. For these brittle materials, the elastic stress distribution is of practical interest in determining the possible effects of a given loading on failure. The penetration of bit teeth into ductile or plastic rock has been analyzed previously11,12 and will not be considered further in the present work. During actual drilling, many teeth act at various points over an irregular hole bottom. In the present analysis a solution is obtained for an idealized problem of determining the elastic stress distribution caused by only one tooth acting alone at the center of a smooth hole bottom. It obviously is not proposed that any driller should put a special one-tooth bit on bottom, but it is hoped that extensions of the simpler problems can eventually lead to a better understanding of more complex actual drilling phenomena. To obtain a solution by the use of stress functions, a short cylindrical cavity with rounded corners and smooth bottom is studied. The solution to this problem should give an insight into the nature of the stress concentration at the bottom of an oil well, which is in reality a long cylindrical hole with an irregular bottom.
After selection of the proper curvilinear co-ordinate system, the stress functions are expressed as series of Legendre polynomials. The coefficient of each term in the series is then determined to satisfy the boundary conditions at a finite number of discrete points on the boundary of the cavity with least-squares error. Results are compared with bottom-hole stresses obtained by photoelastic and relaxation analyses.
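The coefficient-fitting step described above (choosing series coefficients so the boundary conditions are satisfied at discrete points with least-squares error) can be illustrated in miniature. The sketch below fits a truncated Legendre series to a stand-in boundary function by least squares; the function, point count, and series length are arbitrary stand-ins, not the paper's actual boundary data.

```python
import numpy as np
from numpy.polynomial import legendre

# Discrete collocation points along the (parameterized) boundary
x = np.linspace(-1.0, 1.0, 40)
target = np.exp(-x**2)          # stand-in for the prescribed boundary values

# Design matrix: column j holds the Legendre polynomial P_j evaluated at x
n_terms = 10
A = np.stack([legendre.legval(x, [0.0] * j + [1.0]) for j in range(n_terms)],
             axis=1)

# Least-squares choice of the series coefficients, as in the paper's procedure
coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
max_residual = np.max(np.abs(A @ coeffs - target))
```

Because only finitely many collocation points and terms are used, the boundary conditions are satisfied approximately, which is exactly the sense in which the paper's analytical solution "approximately satisfies the boundary conditions."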
-
Institute of Metals Division - Co-Rich Intermediate Phases in the Cb-Co SystemBy Shozo Saito, P. A. Beck
Metallographic and X-ray diffraction study of Cb-Co alloys in the composition range of 7 to 33 at. pct Cb, after annealing at 1175°C, showed that near 25 at. pct Cb an MgNi2-type hexagonal Laves phase exists in a narrow composition range, and that an MgCu2-type cubic Laves phase occurs between approximately 27 and 32.6 at. pct Cb. Lattice parameter and density measurements indicated that, in both phases, the deviations from proper Laves stoichiometry result from the substitution of cobalt atoms for some of the columbium atoms. The same phases occur at 1000°C, together with an additional phase of unknown structure, which appears between the hexagonal Laves phase and the cobalt-base terminal solid solution. Koster and Schmid1 identified a phase corresponding to the composition VCo3, and more recently the crystal structure of this phase has been determined.2 In the Ta-Co system Korchynsky and Fountain recently found two intermediate phases at the composition TaCo3, one of them metastable. In contrast to the V-Co and the Ta-Co systems, in the Cb-Co system no intermediate phase has been found4 at the composition CbCo3. The present investigation was undertaken in order to reexamine the question of the existence of such a phase. EXPERIMENTAL PROCEDURE Twelve alloys were prepared by arc-melting in a water-cooled copper crucible under a helium atmosphere. Electrolytic cobalt and 99.9 pct pure columbium powder were used as starting materials. It was found that melting losses can be reduced by compressing the columbium powder in the form of thin pellets before melting. For the alloys used the melting losses were not higher than 1 pct. The intended compositions of all alloys and the chemical analyses of three of them are given in Table I. Specimens from all alloys were annealed in evacuated fused silica tubes at 1175°C for 3 days and quenched in cold water. A second set of specimens of most alloys was annealed at 1000°C for 7 days and then quenched in cold water.
The annealed and quenched specimens were examined metallographically, using the following etchant: 60 pct glycerine + 20 pct HNO3 + 10 pct HF + 10 pct water. Powder specimens for X-ray diffraction were prepared by crushing annealed solid specimens in a mortar. Alloys containing 27.3, 28, 29.7, and 32.6 at. pct Cb annealed at either 1175° or 1000°C were very brittle and it was found unnecessary to reanneal the powders. However, alloys containing 24.8 at. pct Cb, or less, especially those annealed at 1000°C, were much less brittle and reannealing was required to remove the strains present in the crushed powders. In each case reannealing was done at the same temperature at which the corresponding solid specimens were annealed and it, too, was followed by quenching in cold water. In many instances X-ray diffraction patterns were also taken of the polished and etched solid specimens and compared with the corresponding X-ray diffraction patterns obtained with powders. The X-ray diffraction patterns were taken with an asymmetrical focusing camera, using CrK radiation. For precision lattice parameter measurements some X-ray diffraction patterns were taken with a symmetrical focusing camera, again using CrK radiation. EXPERIMENTAL RESULTS A microscopic examination of the alloys annealed at 1175°C revealed that the alloy containing 25.5 at. pct Cb was composed of a single phase, but that the 24.8 at. pct Cb alloy did contain a small amount of a second phase, identified by means of X-ray diffraction as having a fcc structure, undoubtedly the terminal solid solution based on cobalt. Apart from the few very weak lines corresponding to this minor phase, the X-ray diffraction patterns of these alloys could be well interpreted in terms of an MgNi2-type hexagonal Laves phase structure. The indexing of the X-ray diffraction pattern, Table II, was based on the lattice parameter values a0 = 4.740Å and c0
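The indexing step mentioned above rests on Bragg's law combined with the hexagonal d-spacing formula. The sketch below uses the quoted a0 = 4.740 Å; the c0 value and the Cr K-alpha wavelength (~2.29 Å) are assumed placeholder figures, since the excerpt truncates before giving c0.

```python
import math

def d_hexagonal(h, k, l, a, c):
    """Interplanar spacing (angstroms) for plane (h k l) of a hexagonal
    lattice: 1/d^2 = (4/3)(h^2 + hk + k^2)/a^2 + l^2/c^2."""
    inv_d2 = 4.0 / 3.0 * (h * h + h * k + k * k) / a**2 + l**2 / c**2
    return 1.0 / math.sqrt(inv_d2)

def bragg_theta_deg(d, wavelength=2.29):
    """First-order Bragg angle in degrees (default: approximate Cr K-alpha)."""
    return math.degrees(math.asin(wavelength / (2.0 * d)))

# a0 from the text; c0 = 15.4 angstroms is an assumed placeholder for an
# MgNi2-type cell, used only to make the example concrete
d_100 = d_hexagonal(1, 0, 0, 4.740, 15.4)   # about 4.10 angstroms
```

Each observed line in the powder pattern is matched to an (h k l) whose computed d-spacing reproduces the measured Bragg angle, which is what "indexing" means in the text.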
Jan 1, 1961
-
Part X – October 1968 - Papers - Enthalpy of Formation of CaMg2By J. F. Smith, J. E. Davison
A value for the enthalpy of formation of CaMg2 of -3.14 ± 0.21 kcal per g-atom has been measured by the technique of acid solution calorimetry. This result is in quite good agreement with two earlier determinations by tin solution calorimetry and by direct reaction calorimetry, and averaging of values determined from the three independent calorimetric techniques gives enhanced precision and accuracy with ΔH298(CaMg2) = -3.15 ± 0.05 kcal per g-atom. For comparison with experimental data, values for the enthalpies of formation of CaMg2, SrMg2, and BaMg2 of -9.8, -7.9, and -2.8 kcal per g-atom were estimated from a calculation based on the Wigner-Seitz approximation as modified by Raimes for polyvalent elements. While complete quantitative accord between these calculated values and available experimental data is lacking, nonetheless numerical accord is better than might be expected and, more importantly, parallel numerical trends are observed between experimental and calculated values. WITHIN the past decade the enthalpy of formation of CaMg2 has been determined a) from measurement of magnesium vapor pressures over binary Ca-Mg alloys,1 b) by solution calorimetry with liquid tin as the solvent,2 c) from measurement of hydrogen vapor pressures over ternary alloys of calcium, magnesium, and hydrogen,3 and d) by direct reaction calorimetry.4 The value from tin solution calorimetry is the most precise and is probably the most reliable, and this value is within the quoted uncertainties of the other three experimental results. The overall agreement among the four independent investigations is quite good, particularly so when the diversity of techniques is noted. On the basis of this agreement, CaMg2 was chosen as a test material to evaluate the operation of a newly constructed apparatus for the determination of enthalpies of formation of intermetallic phases by acid solution calorimetry.
This was believed to be a severe test because of the high chemical reactivity of both calcium and magnesium, which reactivity presumably accounts for the fact that an early determination5 of the enthalpies of formation of Ca-Mg alloys by acid solution calorimetry yielded values significantly more negative than the four recent determinations. EXPERIMENTAL APPARATUS AND MATERIALS Experimental Apparatus. The enthalpy of formation of CaMg2 was determined by measuring the difference between the heat evolved when dissolving the metallic compound and the heat evolved when dissolving equivalent amounts of unreacted metallic elements in hydrochloric acid. This was done differentially with an apparatus consisting of twin calorimeters which were constructed to be as nearly identical as possible. The advantage of differential calorimetry is that systematic errors arising from the individual calorimeter design tend to cancel. A schematic representation of the apparatus is shown in Fig. 1. A dead air space around both calorimeters was provided by a large, thermally insulated jacket. Each calorimeter consisted of a 2-liter Dewar flask which was completely enclosed in a copper container. Each Dewar contained 1600 g of 2.5 N HCl to act as the solvent, and thermal effects resulting from solvent evaporation were minimized by covering the acid with 50 g of mineral oil. There was no detectable reaction between the acid and the mineral oil. Equivalent amounts of mechanical energy were added to the calorimeters through twin stirring rods which were driven at the same rpm by a single motor, the intent of the stirring being to maintain thermal equilibrium throughout the solvent. To calibrate the heat capacities of the calorimeters, known amounts of electrical energy could be added by passing measured voltages and currents for known times through submerged heaters, approximately 20 ohms, which were wound noninductively from Manganin wire.
A 6-v storage battery was used as a power source, and a dummy heater was used as an exercise circuit to allow the battery to stabilize at a constant electromotive force before energizing one or the other of the calorimetric heaters. A type K-2 potentiometer was used to measure the potential drop across an energized heater while the current was determined from the potential drop across an external standard resistor. Times of energization were measured with an electric timer, and the electrical energy supplied to a heater
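The differential scheme described above reduces to two small computations: an electrical calibration of each calorimeter's heat capacity (E = V·I·t) and a difference of heats of solution. The sketch below shows both; all numerical inputs are illustrative stand-ins (chosen to echo the paper's final -3.15 kcal per g-atom), not the paper's actual measurements.

```python
def heat_capacity_j_per_k(volts, amps, seconds, delta_t_k):
    """Calorimeter heat capacity from an electrical calibration:
    C = V * I * t / dT."""
    return volts * amps * seconds / delta_t_k

def enthalpy_of_formation(dh_soln_elements, dh_soln_compound):
    """dH_f = dH_soln(unreacted elements) - dH_soln(compound), per g-atom.
    Both inputs use the usual sign convention (exothermic negative)."""
    return dh_soln_elements - dh_soln_compound

# Hypothetical calibration run: 6 V, 0.3 A, 100 s, 0.05 K rise
c_cal = heat_capacity_j_per_k(6.0, 0.3, 100.0, 0.05)     # 3600 J/K

# Hypothetical heats of solution chosen so the difference is -3.15
dh_f = enthalpy_of_formation(-110.00, -106.85)           # kcal per g-atom
```

The compound dissolves less exothermically than the unreacted elements precisely because some energy was already released when the compound formed, which is why the difference recovers the enthalpy of formation.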
Jan 1, 1969
-
Coal - Air Pollution and the Coal IndustryBy H. Pew, J. H. Field
To alleviate pollution more restrictive legislation is being enacted, either limiting emission of pollutants or the type of fuel that can be utilized. The nature and magnitude of air pollution problems affecting the mining, preparation, coking and combustion of coal are described. Methods for combating particulate emissions by use of mechanical separators and electrostatic precipitators are discussed. Proposed methods to meet the problem of gaseous emissions currently receiving considerable attention are described, with special emphasis on methods to decrease pollution by sulfur oxides. Concern about air pollution goes back several centuries, but until very recently most effort has been aimed at coal smoke and other visible pollutants. The classic example of a 'successful' campaign for smoke abatement and control is the fruitful combined effort of the city of Pittsburgh and its surrounding Allegheny County, which eventually led to the reconstruction of downtown Pittsburgh at an estimated cost of one billion dollars. Historically, the city's downtown Golden Triangle district had been afflicted by pollutants evolving from steel mills, from a variety of other industries, and from railroad locomotives. Efforts to alleviate the situation prior to 1943 were virtually ineffective. In 1945, however, a comprehensive redevelopment plan was prepared and backed by state authority. Within a few years a clean, modern metropolis had evolved where once stood America's famous 'smoky city.' But the victory in Pittsburgh, as in various other American cities, has not solved the national problem. Current estimates indicate that 133 million tons of air pollutants from all sources are still emitted annually into the atmosphere above the United States. About 10% of this annual effluent is particulate matter, so that most of the remaining pollution problems will be solved only when other effluents are reduced.
Essentially, these are sulfur oxides, nitrogen oxides, hydrocarbons, and carbon monoxide. Over the years, both states and local communities have tended to increase the restrictions on smoke and fly ash — problems mostly of concern in the combustion of coal. Prior to the middle 1950's, ordinances sometimes permitted emissions of smoke equivalent to as much as No. 3 on the Ringelmann scale. Since 1956, no ordinance has been passed which allows smoke of greater than No. 2. Under today's conditions of improved fuels, equipment and practice, a few communities have passed laws prohibiting emission of smoke of any density darker than Ringelmann No. 1. The majority of existing laws on fly ash emission in the U.S. limit emissions equivalent to 0.85 lb of fly ash per 1000 lb of flue gas. In recent years, however, regulations which have been adopted give cognizance to the higher level of performance now obtainable with improved equipment. A comparison of the restrictions of five codes adopted since 1960 is given in Table I. The most stringent of these is the one for New York City, which provides for a maximum emission of 0.6 lb fly ash per million Btu heat release (equivalent to roughly 0.51 lb/1000 lb of flue gas). The first comprehensive effort to restrict the emission of SO2 resulted from the passage of a 1937 law in St. Louis. This regulation stipulated that coal containing in excess of 23% volatile matter and 2% sulfur must be washed, thereby presumably producing some effective reduction in the input sulfur content. This was followed in 1949 by a Los Angeles County law which prohibited the emission of SO2 in concentrations greater than 0.2%.
Most SO2-restrictive legislation passed since that date has been based on this limit of 0.2% SO2 by volume, although modifications are occasionally permitted under selected conditions, sometimes on the ground that certain limiting ground-level concentrations are not exceeded, as in the rules adopted by the San Francisco Bay area. To date, no legislation has been passed in the U.S. to limit the generation of nitrogen oxides from the combustion of fossil fuels. However, such oxides are considered to be of potential importance in air pollution control because of their possible detrimental effects on health and their reported role in the formation of photochemical smog. Interest in reducing oxides of nitrogen from power plant and auto exhausts is increasing, and regulations limiting their quantity can be expected in the future.
Jan 1, 1968
-
Drilling - Equipment, Methods and Materials - Crossflow and Impact Under Jet Bits By R. H. McLean
Jet impingement produces two mechanisms to clean the bottom of a borehole during jet-bit drilling operations. One is an impact-pressure wave in the immediate area of jet impingement. The other is crossflow, which spreads across the bottom away from the pressure wave. Measurements of the vertical distribution of the crossflow at several positions beneath a 4 3/4-in. tricone jet bit in a laboratory model show that the maximum velocity in the crossflow is directly proportional to the square root of the product of the rate of flow and the velocity of the jets through the nozzles (QV). Other measurements in the laboratory model show that gradients in the impact-pressure wave are directly proportional to the jet QV if the diameters of the nozzles are held constant. Consistency of the basic relationships through changes in the Reynolds numbers of the jets from 31,000 to 93,000 suggests that the laboratory results should be representative of typical flow in field operations having higher Reynolds numbers. Estimates of the impact-pressure waves under conditions other than those in the jet-bit model may be made by comparison with a less complex model—impingement of a jet against an unrestricted flat surface normal to the jet. Equations derived from current jet technology describe the wave in this simple system for a wide range of conditions. INTRODUCTION The efficiency of rotary drilling operations is strongly linked to the efficiency with which cuttings are removed. Cuttings remaining on the bottom of the borehole will impede further penetration by the bit until they are removed. Wasteful regrinding is prevented if the fluid circulated past the bit removes cuttings as rapidly as they are made. Most investigations of bottom-hole cleaning under bits have been concerned with the rate-of-penetration effects of bit type, nozzle configuration, fluid properties, pressure gradients and utilization of hydraulic horsepower.
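The square-root relationship reported above can be illustrated with a brief sketch; the proportionality constant k is a hypothetical placeholder, since the source reports only the functional form, not its value:

```python
import math

def crossflow_vmax(q, v, k=1.0):
    """Maximum crossflow velocity, proportional to sqrt(Q*V).
    k is a HYPOTHETICAL proportionality constant; the source gives
    only the form of the relationship."""
    return k * math.sqrt(q * v)

# Doubling both the flow rate and the nozzle velocity doubles v_max,
# since sqrt((2Q)(2V)) = 2 * sqrt(QV).
base = crossflow_vmax(400.0, 300.0)
doubled = crossflow_vmax(800.0, 600.0)
print(doubled / base)  # 2.0
```

One useful consequence of the form v_max ∝ sqrt(QV) is that a fourfold increase in the product QV is needed to double the crossflow velocity near the bottom.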
These investigations have shown that drilling muds, weighted to keep the pressure in the borehole greater than in adjacent formations and carrying material which forms a filter cake, apparently plaster cuttings against the bottom, making them difficult to remove. In addition to a demonstration of the factors causing poor cleaning, the most significant result of these previous investigations has been the increased use of jet bits accompanied by better hydraulic programs and better selection of drilling fluids. Undoubtedly, these practices have improved bottom-hole cleaning, but rates of penetration when drilling with muds are still often lower than would be expected with perfect cleaning. Although there is general agreement with the concept that increasing hydraulic energy at a jet bit often increases the rate of penetration by improving cleaning, research workers have differed as to the best criterion for evaluating hydraulics. Some lean toward maximizing the product of the rate of flow and the velocity through the nozzles (QV); others prefer to maximize the hydraulic horsepower at the bit; still another says that neither criterion is universally suitable. This controversy shows that conclusive field evidence has not established the superiority of either of these criteria. Analysis of the fluid flow which cleans the bottom should contribute to resolving the controversy. Impinging jets clean the bottom by creating two mechanisms capable of dislodging cuttings. One is the impact from the momentum of the jet and the other is the flow parallel to the bottom away from the area of impingement. The impact force appears in the form of a pressure wave on the bottom and, in this paper, is called the impact-pressure wave. The flow parallel to the bottom is called crossflow. Fig. 1 illustrates these two mechanisms. This paper describes crossflow and the impact-pressure wave beneath jet bits and relates them to controllable parameters.
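The disagreement over criteria can be made concrete with a hedged numerical sketch. Under an assumed fixed pump pressure and a conventional parasitic-loss model of the form c*Q**1.8 (both illustrative choices, not data from the paper), the flow rate that maximizes QV differs from the one that maximizes bit hydraulic horsepower:

```python
# Illustration that the two hydraulics criteria peak at different flow
# rates. ASSUMPTIONS (not from the source): fixed pump pressure P,
# parasitic losses proportional to Q**1.8, and nozzle velocity
# proportional to the square root of the bit pressure drop.
# Units are arbitrary; only the comparison of optima matters.
P = 3000.0   # pump pressure (illustrative)
c = 0.05     # parasitic-loss coefficient (illustrative)

def bit_dp(q):
    """Pressure drop remaining at the bit after parasitic losses."""
    return P - c * q**1.8

def qv(q):
    """Impact-style criterion: Q times nozzle velocity (~ sqrt(dp))."""
    return q * bit_dp(q)**0.5 if bit_dp(q) > 0 else 0.0

def hhp(q):
    """Hydraulic-horsepower criterion: proportional to Q times bit dp."""
    return q * bit_dp(q) if bit_dp(q) > 0 else 0.0

qs = [q / 10.0 for q in range(1, 5000)]
best_qv = max(qs, key=qv)
best_hhp = max(qs, key=hhp)
print(best_qv > best_hhp)  # True: the QV criterion favors a higher flow rate
```

Under these assumptions the two criteria recommend different operating points, which is consistent with the controversy the text describes.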
Through these relations, criteria for use in designing hydraulics to achieve maximum use of the mechanisms of cleaning are derived. PROPERTIES OF CROSSFLOW CONCEPTS DEVELOPED IN PREVIOUS INVESTIGATIONS Concepts of scavenging the bottom by crossflow have been well discussed by Bobo, et al. They examine the means by which crossflow may create forces on cuttings and conclude that the cleaning action of crossflow can be increased by increasing the velocity of the flow stream close to the bottom. Using boundary layer theory, they indicate that a reduction in the viscosity of the drilling
Jan 1, 1965
-
Reservoir Engineering–General - Computer Prediction of Water Drive of Oil and Gas Mixtures Through Irregularly Bounded Porous Media–Three-Phase Flow By R. V. Higgins, A. J. Leighton
Interest by petroleum engineers in the flow of three phases—oil, gas and water—in irregularly bounded porous media lies mostly in the performance calculation of water floods of reservoirs that have been partially depleted as the result of expansion of much of the originally dissolved gas. The authors present a method to forecast three-phase flow in complex geometry and explain the details by the use of a specific example of a five-spot water flood of a partially depleted stratified reservoir. In this example, the fluid and rock properties of a field given by Prats, et al., were used. The computed results fit the field performance more accurately than those of Prats, et al., and Slider. The time required for the high-speed digital computer to make the calculations, including the contributions from the different layered zones, is about one minute. INTRODUCTION The declining rate of discoveries of new oil fields in the United States makes the recovery of more oil from known reservoirs more attractive than previously. Water flooding has an excellent proved background for recovering additional oil economically. Accordingly, the main interest of this paper is in this type of recovery. In petroleum engineering studies, commercial interest in three-phase flow is mostly in the water flooding of reservoirs in which the oil has been partially produced by the expansion of dissolved gas. When water is pumped into these reservoirs, three-phase flow takes place. Although this paper is concerned with these conditions, the principles involved could be used for other conditions when and where they come to the fore. Many investigators have made contributions to non-empirical forecast methods using basic scientific engineering principles. Several of these used the oil in place at the start of the water flood and the oil remaining after a large quantity of water has passed through a core as key values in their calculations.
Recently, Prats, et al., and Slider have reduced assumptions by adding the third phase—gas—in their forecasting methods. Slider uses the mobilities in the immediate vicinity of inlet and outlet wells as an aid to simulate the resistance effect in the five-spot pattern of the flow of oil, gas and water. Prats, et al., minimize assumptions by using previously determined laboratory data for determining sweep efficiency in the five-spot pattern. In neither of these papers is the saturation profile continuously affected by permeability-saturation curves. Sheldon and Dougherty recently described a method that employs continuously changing saturation profiles using permeability curves, has a minimum of assumptions and needs no prior sweep efficiency. The Higgins-Leighton method, described in this paper, has all of these features; however, many of the techniques are different from those of Sheldon and Dougherty. The Higgins-Leighton method, tested in May, 1961, is direct and easy to apply and requires very little computer time to calculate a forecast. The short computer time is especially helpful in the study of a reservoir containing many layers of different relative permeabilities. In the Higgins-Leighton method, the individual pressures do not have to be calculated, as the resistance to flow in each cell in the flow pattern is readily determined without the use of any iterative techniques. The saturation and permeability distributions are readily determined. These data and the shape factor, which is measured only once from the potentiometric model when the mobility ratio is one, determine the resistance to flow in each cell. THEORY The authors showed in a previous paper that, as an aid to calculating performance, the reservoir can be divided into channels using the streamlines of a potentiometric model as a guide. See Fig. 1. This procedure also was used in this paper.
The authors also showed that, by treating conduits as approximately one-dimensional and neglecting pressure gradients transverse to the main flow, the Buckley-Leverett equation may be expressed in one-dimensional form. The principles expressed by this equation are employed extensively in the three-phase flow, as they were in two-phase flow. In the three-phase flow, the channels (the size and shape of which are taken from a potentiometric model)
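The cell-resistance idea described above, in which each cell's resistance follows from a geometric shape factor and the total fluid mobility so that no pressures need be computed, can be sketched as a series sum. The shape factors and mobilities below are illustrative numbers, not data from the paper:

```python
# Hedged sketch of the series-resistance analogy: each cell in a
# streamtube channel contributes a resistance equal to a geometric
# shape factor divided by the total mobility (oil + gas + water) in
# that cell. All numbers here are ILLUSTRATIVE, not the paper's data.
def cell_resistance(shape_factor, mobility):
    """Resistance of one cell: geometry over total fluid mobility."""
    return shape_factor / mobility

def channel_resistance(cells):
    """Cells along a streamtube act in series, so resistances add."""
    return sum(cell_resistance(s, m) for s, m in cells)

# Three cells, each as (shape factor, total mobility):
cells = [(1.2, 0.8), (1.0, 0.5), (1.4, 0.3)]
print(round(channel_resistance(cells), 3))  # 8.167
```

Because the shape factors are fixed by the potentiometric model, only the mobilities change as saturations evolve, which is why the method needs no iteration on pressure.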
-
Institute of Metals Division - Metallographic Examination of Beryllium Alloys By G. K. Manning, M. C. Udy, L. W. Eastwood
Those who have examined beryllium and beryllium-rich alloys under the microscope have noted the difficulties encountered when preparing these materials for examination. Hard constituents are readily chipped and pulled out of the matrix, and the soft ones are easily gouged out or embedded with abrasive or other material. The matrix is easily deformed, which makes it very difficult to remove the effects of scratches. Furthermore, the matrix and some of the constituents are also readily pitted during etching, which makes the structure difficult to develop. The metallographic techniques designed to avoid these difficulties are neither radically new nor necessarily the best possible procedures. They were developed at Battelle to fill immediate requirements as part of a study of the preparation and pouring of beryllium melts as described in a separate paper. A part of the research program on the causes of gas unsoundness in beryllium castings entailed a study of the effects of such gases or gas formers as oxygen or oxides, nitrogen or nitrides, and carbon or carbides. These gases or gas formers, if dissolved in the melt, might be an important factor in the study of unsoundness caused by gas evolution in beryllium. Because it was necessary to be able to distinguish these gases or gas formers, if present as alloy constituents, from metallic phases also present as alloy constituents, a series of alloys was prepared with additions of the various possible metallic and non-metallic constituent formers. It should be emphasized that the constituents referred to as gases or gas formers are important in the study of gases in beryllium only if they form a part of the alloy, that is, if they dissolve in the liquid or solid metal. During the first part of the development of metallographic techniques, an unpublished Massachusetts Institute of Technology report (TC 3315, Nov. 1945) by Paul Gordon, describing various microconstituents in beryllium, was quite useful.
Of particular interest in this report is the confirmation of one of the conclusions drawn from the present investigation, that is, that a distinct oxide phase apparently does not occur in beryllium. The Preparation of Metallographic Specimens GRINDING PROCEDURE The original specimen is either sawed or cut on an abrasive wheel to a convenient size. For best results, the surface to be polished should not be more than about 1/4 in. square, and it is convenient to mount the specimens in bakelite to facilitate handling. Two alternative grinding procedures have been developed. As one alternative, specimens are ground successively on 120-, 240-, and 400-grit, wet-or-dry metallographic discs. The abrasive discs are revolved at 1750 rpm on a conventional pedestal grinder. The coarsest disc (120 grit) is used either wet or dry, and in most instances, it can be eliminated from the procedure. Grinding on the two finer discs (240 and 400 grit) is accompanied by the application of kerosene. The technique consists of holding an oil can, containing kerosene, in one hand and the mounted specimen in the other hand. While the grinding is being done, several drops of kerosene are applied every few seconds close to the center of the disc. Carbon tetrachloride serves at least as well as kerosene as a lubricant; perhaps it is slightly better, but it volatilizes so readily that an additional health hazard is involved. Light oils and water are not satisfactory for the finer grinding operations. The second procedure is perhaps slightly slower than the first and requires a more careful technique, but it eliminates the use of kerosene. The specimen is first ground wet on a 240-grit disc, followed by dry grinding on a 400-grit disc. Finer grinding does not appear to be necessary with either of the two alternative procedures. 
The pressures used throughout either grinding operation must be extremely light, that is, barely sufficient to hold the specimen in contact with the disc; otherwise, flow and chipping are almost certain to occur. In both procedures, it is essential that the discs be sharp and that they be discarded as they show evidence of becoming dull or loaded. In general, not more than four or five specimens can be ground on a disc before the disc becomes dull enough to warrant discarding it.
Jan 1, 1950
-
Technical Note - Monohydrate Process For Soda Ash From Wyoming Trona By D. Muraoka
Introduction Soda ash, anhydrous sodium carbonate, is produced from underground trona deposits occurring in the Green River Basin of southwestern Wyoming. Stauffer Chemical Co. of Wyoming, a jointly owned subsidiary of Stauffer Chemical Co. and Rocky Mountain Energy Co., mines the trona and processes it into high-purity soda ash. Stauffer Wyoming has been in operation since 1962 with a current production capacity of about 1.8 Mt (2 million st) per year of soda ash. The refinery has five parallel operating trains using the monohydrate process, which is a series of chemical processing unit operations described below. Screening and Crushing The mined trona is brought to the surface in two counterbalancing ore skips. The ore is initially screened, with refinery feed (less than pea-gravel-size material) entering the process. Oversize is conveyed to an outdoor stockpile. Ore is recycled from under the stockpile via cone feeders as the refinery needs require. The ore is conveyed to hammer-type crushers and screened. The refinery feed enters the process and the oversize is recycled to the crushers. Calcining The first step in refining trona to soda ash is calcining the trona ore. This is accomplished in concurrent-flow rotary kilns that are direct fired with natural gas: [2(Na2CO3·NaHCO3·2H2O) + heat → 3Na2CO3 + 5H2O + CO2] The trona ore is fed to the kiln at the burner end and is intimately mixed with the hot exhaust gases from the burner. Direct contact with the flame is avoided. The ore moves down the kiln by gravity and rotation aided by the kiln's internal lifters. The exhaust gases are cleaned to 99+% particulate removal and discharged to the atmosphere. Dissolving the Calcined Trona The calcined trona is conveyed from the kiln discharge to a rotary drum dissolver where it is mixed with water and weak solution (unsaturated solution of calcined trona in water). The calcined trona is soluble whereas the impurities, primarily shale, are insoluble.
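The calcination reaction, 2(Na2CO3·NaHCO3·2H2O) + heat → 3Na2CO3 + 5H2O + CO2, fixes the theoretical mass yield of soda ash from pure trona. A short stoichiometric sketch, using standard molar masses rather than figures from the source, shows the calculation:

```python
# Mass balance for the calcination of trona:
#   2(Na2CO3.NaHCO3.2H2O) + heat -> 3Na2CO3 + 5H2O + CO2
# Molar masses in g/mol are standard reference values, not from the source.
NA2CO3 = 105.99
NAHCO3 = 84.01
H2O = 18.02
TRONA = NA2CO3 + NAHCO3 + 2 * H2O  # Na2CO3.NaHCO3.2H2O, about 226 g/mol

def soda_ash_yield():
    """Mass fraction of pure trona recovered as soda ash:
    2 mol of trona yield 3 mol of Na2CO3."""
    return 3 * NA2CO3 / (2 * TRONA)

print(round(soda_ash_yield(), 3))  # about 0.703
```

So calcining pure trona can recover at most roughly 70% of the feed mass as anhydrous sodium carbonate, the balance leaving as water vapor and carbon dioxide; ore impurities such as shale reduce the plant yield further.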
The discharge from the dissolver is a slurry containing saturated solution (about 30% dissolved calcined trona in water), some undissolved calcined trona, insoluble impurities, and organic impurities. The dissolver discharge enters a rake classifier that makes the first solid/liquid separation in the process by removing the heavy undissolved material from the saturated solution. This undissolved material is conveyed to a second rotary drum dissolver and mixed with water to dissolve any remaining calcined trona. The secondary dissolver discharges to a second rake classifier that separates the solids from the liquid. The solids are now primarily insoluble impurities with very little unrecovered calcined trona. These wastes are conveyed to evaporation ponds for disposal. The liquid from the secondary rake is recycled to the primary dissolvers' weak solution system. Returning to the primary rake classifier, the saturated solution which contains suspended insoluble muds is pumped to a thickener. Flocculant is added to aid settling of the suspended muds. The flocculated muds are collected at the bottom of the thickener and pumped to vacuum cloth filters. The adhering saturated solution is separated from the muds that remain on the filter cloth. The recovered solution is returned to the dissolvers' weak solution system. The muds are reslurried and pumped to the evaporation ponds for disposal. Saturated solution, containing some suspended insoluble muds along with the organic impurities, overflows the thickener. Filtering the Saturated Solution The saturated solution overflowing the thickener enters a holding tank where it is pumped to a bank of pressure leaf filters. The suspended muds are captured on the filter leaves while the cleaned saturated solution is pumped to polishing filters. The polishing filters are also leaf pressure type, intended to remove any suspended solids that may pass the primary filters. Organic impurities are removed with activated carbon. 
The purified solution is now free of insoluble muds and has most of the organic impurities removed. Crystallizing and Drying the Monohydrate The purified saturated solution is pumped to evaporative crystallizers. The solution is circulated from the crystallizer vessel through a heat exchanger. Indirect contact with steam in the heat exchanger causes the saturated solution to boil, thus crystallizing the soda ash in its monohydrate form. Slurry crystal density is maintained in the crystallizer by simultaneous feed of fresh saturated solution and drawoff of the slurry.
Jan 1, 1986
-
Iron and Steel Division - Twenty-Five More Years of Metallography (Howe Memorial Lecture) By J. R. Vilella
IN accordance with the custom of this society, we are gathered here, as we have every year since 1924, to honor the memory of the eminent American metallurgist and teacher, Professor Henry Marion Howe. Unlike many of the distinguished metallurgists who have preceded me as a Howe lecturer, I cannot bring to you reminiscences of his personality, for it was not my privilege to be associated with Professor Howe, or to be directly one of his students. Yet, Professor Howe and Professor Albert Sauveur, through the medium of their books, were my first teachers of metallography, as they have been of almost all American metallurgists of my generation. As a teacher, and for many years the acknowledged leader of American metallurgists, he exercised a profound influence in the growth of our science and was held in honor by the men of science of his time. I can speak no words of technical appreciation that will add luster to his fame, for by his prophetic vision, his teachings, and his researches he stands among the immortals in the memory of all metallurgists. In 1926, the third Howe Memorial Lecture was presented by Professor William Campbell of Columbia University, who entitled it "Twenty-Five Years of Metallography." He took as a starting date for his chronology the turn of the century, which coincided with his arrival from England to work in association with Howe at the Columbia School of Mines. In that informative lecture Professor Campbell enumerated the important advances in metallography achieved during the first quarter of the century, and, it now appears, may have established the custom of reviewing such progress every twenty-five years. The scope of Professor Campbell's lecture was as broad as his metallurgical knowledge, for it embraced a wide portion of the field of metallography, both ferrous and nonferrous.
Twenty-five years later, the Howe Memorial Lecture Committee saw fit to assign to me the honor of writing a lecture that would commemorate the work of Henry Marion Howe and would at the same time mark the 25th anniversary of the lecture by Professor Campbell. The Committee suggested that this lecture might properly be called "Twenty-Five More Years of Metallography," a suggestion that I have adopted. I must confess, however, that I have not followed the precedent established by Campbell and have narrowed the scope of this lecture to an appraisal of those achievements which in my opinion have contributed most to the progress of microscopical metallography during the past twenty-five years. Progress in Metallography The metallographic methods most widely used today, with the exception of the electron microscope, were firmly established more than twenty-five years ago. In general, our specimens were prepared for microscopic examination in those days in much the same manner as they are today. It is true that new details of technique have been introduced from time to time, and that superior equipment is available today, but on the whole, these improvements have been in the nature of refinements, often a matter of personal preference, and none can be considered essential to the attainment of the ultimate goal of the art and science of metallography, which is to reveal the structure of metallic specimens with unequivocal clarity so that they may be interpreted correctly. Mechanical metallographic polishing, which was the only method available in 1926, is still universally practiced and still consists of abrading the metallic specimen with a series of abrasives of increasing fineness until a specular surface is attained. We have now the alternative method of electropolishing, but it is not widely used because, except in a few special cases, its results are inferior to those of competent mechanical polishing.
Likewise, most of the etching reagents preferred today were in common use more than twenty-five years ago and were applied in the same manner as they are today. Valuable improvements have been made in the optical and mechanical performance of metallurgical microscopes, but there was no dearth in those days of excellent instruments equipped with achromatic and apochromatic objectives capable of yielding micrographs comparable in quality with the best that we can make today. In fact, it would be a difficult task for any metallographer today to make optical micrographs at magnifications in excess of 3000 diameters that would surpass those made by Lucas more than twenty-five years ago. One of these is shown in Fig. 1. Yet, it is unquestionable that on the whole, the micrographs appearing in the metallurgical literature today are vastly superior to those
Jan 1, 1952